Nov 21 23:42:27 np0005531754 kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 21 23:42:27 np0005531754 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 21 23:42:27 np0005531754 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 21 23:42:27 np0005531754 kernel: BIOS-provided physical RAM map:
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 21 23:42:27 np0005531754 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 21 23:42:27 np0005531754 kernel: NX (Execute Disable) protection: active
Nov 21 23:42:27 np0005531754 kernel: APIC: Static calls initialized
Nov 21 23:42:27 np0005531754 kernel: SMBIOS 2.8 present.
Nov 21 23:42:27 np0005531754 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 21 23:42:27 np0005531754 kernel: Hypervisor detected: KVM
Nov 21 23:42:27 np0005531754 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 21 23:42:27 np0005531754 kernel: kvm-clock: using sched offset of 5046684261 cycles
Nov 21 23:42:27 np0005531754 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 21 23:42:27 np0005531754 kernel: tsc: Detected 2799.998 MHz processor
Nov 21 23:42:27 np0005531754 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 21 23:42:27 np0005531754 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 21 23:42:27 np0005531754 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 21 23:42:27 np0005531754 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 21 23:42:27 np0005531754 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 21 23:42:27 np0005531754 kernel: Using GB pages for direct mapping
Nov 21 23:42:27 np0005531754 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 21 23:42:27 np0005531754 kernel: ACPI: Early table checksum verification disabled
Nov 21 23:42:27 np0005531754 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 21 23:42:27 np0005531754 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 21 23:42:27 np0005531754 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 21 23:42:27 np0005531754 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 21 23:42:27 np0005531754 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 21 23:42:27 np0005531754 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 21 23:42:27 np0005531754 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 21 23:42:27 np0005531754 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 21 23:42:27 np0005531754 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 21 23:42:27 np0005531754 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 21 23:42:27 np0005531754 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 21 23:42:27 np0005531754 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 21 23:42:27 np0005531754 kernel: No NUMA configuration found
Nov 21 23:42:27 np0005531754 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 21 23:42:27 np0005531754 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 21 23:42:27 np0005531754 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 21 23:42:27 np0005531754 kernel: Zone ranges:
Nov 21 23:42:27 np0005531754 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 21 23:42:27 np0005531754 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 21 23:42:27 np0005531754 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 21 23:42:27 np0005531754 kernel:  Device   empty
Nov 21 23:42:27 np0005531754 kernel: Movable zone start for each node
Nov 21 23:42:27 np0005531754 kernel: Early memory node ranges
Nov 21 23:42:27 np0005531754 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 21 23:42:27 np0005531754 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 21 23:42:27 np0005531754 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 21 23:42:27 np0005531754 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 21 23:42:27 np0005531754 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 21 23:42:27 np0005531754 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 21 23:42:27 np0005531754 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 21 23:42:27 np0005531754 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 21 23:42:27 np0005531754 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 21 23:42:27 np0005531754 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 21 23:42:27 np0005531754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 21 23:42:27 np0005531754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 21 23:42:27 np0005531754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 21 23:42:27 np0005531754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 21 23:42:27 np0005531754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 21 23:42:27 np0005531754 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 21 23:42:27 np0005531754 kernel: TSC deadline timer available
Nov 21 23:42:27 np0005531754 kernel: CPU topo: Max. logical packages:   8
Nov 21 23:42:27 np0005531754 kernel: CPU topo: Max. logical dies:       8
Nov 21 23:42:27 np0005531754 kernel: CPU topo: Max. dies per package:   1
Nov 21 23:42:27 np0005531754 kernel: CPU topo: Max. threads per core:   1
Nov 21 23:42:27 np0005531754 kernel: CPU topo: Num. cores per package:     1
Nov 21 23:42:27 np0005531754 kernel: CPU topo: Num. threads per package:   1
Nov 21 23:42:27 np0005531754 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 21 23:42:27 np0005531754 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 21 23:42:27 np0005531754 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 21 23:42:27 np0005531754 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 21 23:42:27 np0005531754 kernel: Booting paravirtualized kernel on KVM
Nov 21 23:42:27 np0005531754 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 21 23:42:27 np0005531754 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 21 23:42:27 np0005531754 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 21 23:42:27 np0005531754 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 21 23:42:27 np0005531754 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 21 23:42:27 np0005531754 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 21 23:42:27 np0005531754 kernel: random: crng init done
Nov 21 23:42:27 np0005531754 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: Fallback order for Node 0: 0 
Nov 21 23:42:27 np0005531754 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 21 23:42:27 np0005531754 kernel: Policy zone: Normal
Nov 21 23:42:27 np0005531754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 21 23:42:27 np0005531754 kernel: software IO TLB: area num 8.
Nov 21 23:42:27 np0005531754 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 21 23:42:27 np0005531754 kernel: ftrace: allocating 49298 entries in 193 pages
Nov 21 23:42:27 np0005531754 kernel: ftrace: allocated 193 pages with 3 groups
Nov 21 23:42:27 np0005531754 kernel: Dynamic Preempt: voluntary
Nov 21 23:42:27 np0005531754 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 21 23:42:27 np0005531754 kernel: rcu: 	RCU event tracing is enabled.
Nov 21 23:42:27 np0005531754 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 21 23:42:27 np0005531754 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 21 23:42:27 np0005531754 kernel: 	Rude variant of Tasks RCU enabled.
Nov 21 23:42:27 np0005531754 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 21 23:42:27 np0005531754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 21 23:42:27 np0005531754 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 21 23:42:27 np0005531754 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 21 23:42:27 np0005531754 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 21 23:42:27 np0005531754 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 21 23:42:27 np0005531754 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 21 23:42:27 np0005531754 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 21 23:42:27 np0005531754 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 21 23:42:27 np0005531754 kernel: Console: colour VGA+ 80x25
Nov 21 23:42:27 np0005531754 kernel: printk: console [ttyS0] enabled
Nov 21 23:42:27 np0005531754 kernel: ACPI: Core revision 20230331
Nov 21 23:42:27 np0005531754 kernel: APIC: Switch to symmetric I/O mode setup
Nov 21 23:42:27 np0005531754 kernel: x2apic enabled
Nov 21 23:42:27 np0005531754 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 21 23:42:27 np0005531754 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 21 23:42:27 np0005531754 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 21 23:42:27 np0005531754 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 21 23:42:27 np0005531754 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 21 23:42:27 np0005531754 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 21 23:42:27 np0005531754 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 21 23:42:27 np0005531754 kernel: Spectre V2 : Mitigation: Retpolines
Nov 21 23:42:27 np0005531754 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 21 23:42:27 np0005531754 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 21 23:42:27 np0005531754 kernel: RETBleed: Mitigation: untrained return thunk
Nov 21 23:42:27 np0005531754 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 21 23:42:27 np0005531754 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 21 23:42:27 np0005531754 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 21 23:42:27 np0005531754 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 21 23:42:27 np0005531754 kernel: x86/bugs: return thunk changed
Nov 21 23:42:27 np0005531754 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 21 23:42:27 np0005531754 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 21 23:42:27 np0005531754 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 21 23:42:27 np0005531754 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 21 23:42:27 np0005531754 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 21 23:42:27 np0005531754 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 21 23:42:27 np0005531754 kernel: Freeing SMP alternatives memory: 40K
Nov 21 23:42:27 np0005531754 kernel: pid_max: default: 32768 minimum: 301
Nov 21 23:42:27 np0005531754 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 21 23:42:27 np0005531754 kernel: landlock: Up and running.
Nov 21 23:42:27 np0005531754 kernel: Yama: becoming mindful.
Nov 21 23:42:27 np0005531754 kernel: SELinux:  Initializing.
Nov 21 23:42:27 np0005531754 kernel: LSM support for eBPF active
Nov 21 23:42:27 np0005531754 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 21 23:42:27 np0005531754 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 21 23:42:27 np0005531754 kernel: ... version:                0
Nov 21 23:42:27 np0005531754 kernel: ... bit width:              48
Nov 21 23:42:27 np0005531754 kernel: ... generic registers:      6
Nov 21 23:42:27 np0005531754 kernel: ... value mask:             0000ffffffffffff
Nov 21 23:42:27 np0005531754 kernel: ... max period:             00007fffffffffff
Nov 21 23:42:27 np0005531754 kernel: ... fixed-purpose events:   0
Nov 21 23:42:27 np0005531754 kernel: ... event mask:             000000000000003f
Nov 21 23:42:27 np0005531754 kernel: signal: max sigframe size: 1776
Nov 21 23:42:27 np0005531754 kernel: rcu: Hierarchical SRCU implementation.
Nov 21 23:42:27 np0005531754 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 21 23:42:27 np0005531754 kernel: smp: Bringing up secondary CPUs ...
Nov 21 23:42:27 np0005531754 kernel: smpboot: x86: Booting SMP configuration:
Nov 21 23:42:27 np0005531754 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 21 23:42:27 np0005531754 kernel: smp: Brought up 1 node, 8 CPUs
Nov 21 23:42:27 np0005531754 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 21 23:42:27 np0005531754 kernel: node 0 deferred pages initialised in 9ms
Nov 21 23:42:27 np0005531754 kernel: Memory: 7765988K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 21 23:42:27 np0005531754 kernel: devtmpfs: initialized
Nov 21 23:42:27 np0005531754 kernel: x86/mm: Memory block size: 128MB
Nov 21 23:42:27 np0005531754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 21 23:42:27 np0005531754 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: pinctrl core: initialized pinctrl subsystem
Nov 21 23:42:27 np0005531754 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 21 23:42:27 np0005531754 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 21 23:42:27 np0005531754 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 21 23:42:27 np0005531754 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 21 23:42:27 np0005531754 kernel: audit: initializing netlink subsys (disabled)
Nov 21 23:42:27 np0005531754 kernel: audit: type=2000 audit(1763786546.133:1): state=initialized audit_enabled=0 res=1
Nov 21 23:42:27 np0005531754 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 21 23:42:27 np0005531754 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 21 23:42:27 np0005531754 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 21 23:42:27 np0005531754 kernel: cpuidle: using governor menu
Nov 21 23:42:27 np0005531754 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 21 23:42:27 np0005531754 kernel: PCI: Using configuration type 1 for base access
Nov 21 23:42:27 np0005531754 kernel: PCI: Using configuration type 1 for extended access
Nov 21 23:42:27 np0005531754 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 21 23:42:27 np0005531754 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 21 23:42:27 np0005531754 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 21 23:42:27 np0005531754 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 21 23:42:27 np0005531754 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 21 23:42:27 np0005531754 kernel: Demotion targets for Node 0: null
Nov 21 23:42:27 np0005531754 kernel: cryptd: max_cpu_qlen set to 1000
Nov 21 23:42:27 np0005531754 kernel: ACPI: Added _OSI(Module Device)
Nov 21 23:42:27 np0005531754 kernel: ACPI: Added _OSI(Processor Device)
Nov 21 23:42:27 np0005531754 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 21 23:42:27 np0005531754 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 21 23:42:27 np0005531754 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 21 23:42:27 np0005531754 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 21 23:42:27 np0005531754 kernel: ACPI: Interpreter enabled
Nov 21 23:42:27 np0005531754 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 21 23:42:27 np0005531754 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 21 23:42:27 np0005531754 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 21 23:42:27 np0005531754 kernel: PCI: Using E820 reservations for host bridge windows
Nov 21 23:42:27 np0005531754 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 21 23:42:27 np0005531754 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 21 23:42:27 np0005531754 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [3] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [4] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [5] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [6] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [7] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [8] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [9] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [10] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [11] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [12] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [13] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [14] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [15] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [16] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [17] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [18] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [19] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [20] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [21] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [22] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [23] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [24] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [25] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [26] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [27] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [28] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [29] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [30] registered
Nov 21 23:42:27 np0005531754 kernel: acpiphp: Slot [31] registered
Nov 21 23:42:27 np0005531754 kernel: PCI host bridge to bus 0000:00
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 21 23:42:27 np0005531754 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 21 23:42:27 np0005531754 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 21 23:42:27 np0005531754 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 21 23:42:27 np0005531754 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 21 23:42:27 np0005531754 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 21 23:42:27 np0005531754 kernel: iommu: Default domain type: Translated
Nov 21 23:42:27 np0005531754 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 21 23:42:27 np0005531754 kernel: SCSI subsystem initialized
Nov 21 23:42:27 np0005531754 kernel: ACPI: bus type USB registered
Nov 21 23:42:27 np0005531754 kernel: usbcore: registered new interface driver usbfs
Nov 21 23:42:27 np0005531754 kernel: usbcore: registered new interface driver hub
Nov 21 23:42:27 np0005531754 kernel: usbcore: registered new device driver usb
Nov 21 23:42:27 np0005531754 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 21 23:42:27 np0005531754 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 21 23:42:27 np0005531754 kernel: PTP clock support registered
Nov 21 23:42:27 np0005531754 kernel: EDAC MC: Ver: 3.0.0
Nov 21 23:42:27 np0005531754 kernel: NetLabel: Initializing
Nov 21 23:42:27 np0005531754 kernel: NetLabel:  domain hash size = 128
Nov 21 23:42:27 np0005531754 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 21 23:42:27 np0005531754 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 21 23:42:27 np0005531754 kernel: PCI: Using ACPI for IRQ routing
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 21 23:42:27 np0005531754 kernel: vgaarb: loaded
Nov 21 23:42:27 np0005531754 kernel: clocksource: Switched to clocksource kvm-clock
Nov 21 23:42:27 np0005531754 kernel: VFS: Disk quotas dquot_6.6.0
Nov 21 23:42:27 np0005531754 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 21 23:42:27 np0005531754 kernel: pnp: PnP ACPI init
Nov 21 23:42:27 np0005531754 kernel: pnp: PnP ACPI: found 5 devices
Nov 21 23:42:27 np0005531754 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 21 23:42:27 np0005531754 kernel: NET: Registered PF_INET protocol family
Nov 21 23:42:27 np0005531754 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 21 23:42:27 np0005531754 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 21 23:42:27 np0005531754 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 21 23:42:27 np0005531754 kernel: NET: Registered PF_XDP protocol family
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 21 23:42:27 np0005531754 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 21 23:42:27 np0005531754 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 21 23:42:27 np0005531754 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 90944 usecs
Nov 21 23:42:27 np0005531754 kernel: PCI: CLS 0 bytes, default 64
Nov 21 23:42:27 np0005531754 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 21 23:42:27 np0005531754 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 21 23:42:27 np0005531754 kernel: ACPI: bus type thunderbolt registered
Nov 21 23:42:27 np0005531754 kernel: Trying to unpack rootfs image as initramfs...
Nov 21 23:42:27 np0005531754 kernel: Initialise system trusted keyrings
Nov 21 23:42:27 np0005531754 kernel: Key type blacklist registered
Nov 21 23:42:27 np0005531754 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 21 23:42:27 np0005531754 kernel: zbud: loaded
Nov 21 23:42:27 np0005531754 kernel: integrity: Platform Keyring initialized
Nov 21 23:42:27 np0005531754 kernel: integrity: Machine keyring initialized
Nov 21 23:42:27 np0005531754 kernel: Freeing initrd memory: 85868K
Nov 21 23:42:27 np0005531754 kernel: NET: Registered PF_ALG protocol family
Nov 21 23:42:27 np0005531754 kernel: xor: automatically using best checksumming function   avx       
Nov 21 23:42:27 np0005531754 kernel: Key type asymmetric registered
Nov 21 23:42:27 np0005531754 kernel: Asymmetric key parser 'x509' registered
Nov 21 23:42:27 np0005531754 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 21 23:42:27 np0005531754 kernel: io scheduler mq-deadline registered
Nov 21 23:42:27 np0005531754 kernel: io scheduler kyber registered
Nov 21 23:42:27 np0005531754 kernel: io scheduler bfq registered
Nov 21 23:42:27 np0005531754 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 21 23:42:27 np0005531754 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 21 23:42:27 np0005531754 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 21 23:42:27 np0005531754 kernel: ACPI: button: Power Button [PWRF]
Nov 21 23:42:27 np0005531754 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 21 23:42:27 np0005531754 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 21 23:42:27 np0005531754 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 21 23:42:27 np0005531754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 21 23:42:27 np0005531754 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 21 23:42:27 np0005531754 kernel: Non-volatile memory driver v1.3
Nov 21 23:42:27 np0005531754 kernel: rdac: device handler registered
Nov 21 23:42:27 np0005531754 kernel: hp_sw: device handler registered
Nov 21 23:42:27 np0005531754 kernel: emc: device handler registered
Nov 21 23:42:27 np0005531754 kernel: alua: device handler registered
Nov 21 23:42:27 np0005531754 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 21 23:42:27 np0005531754 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 21 23:42:27 np0005531754 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 21 23:42:27 np0005531754 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 21 23:42:27 np0005531754 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 21 23:42:27 np0005531754 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 21 23:42:27 np0005531754 kernel: usb usb1: Product: UHCI Host Controller
Nov 21 23:42:27 np0005531754 kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 21 23:42:27 np0005531754 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 21 23:42:27 np0005531754 kernel: hub 1-0:1.0: USB hub found
Nov 21 23:42:27 np0005531754 kernel: hub 1-0:1.0: 2 ports detected
Nov 21 23:42:27 np0005531754 kernel: usbcore: registered new interface driver usbserial_generic
Nov 21 23:42:27 np0005531754 kernel: usbserial: USB Serial support registered for generic
Nov 21 23:42:27 np0005531754 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 21 23:42:27 np0005531754 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 21 23:42:27 np0005531754 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 21 23:42:27 np0005531754 kernel: mousedev: PS/2 mouse device common for all mice
Nov 21 23:42:27 np0005531754 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 21 23:42:27 np0005531754 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 21 23:42:27 np0005531754 kernel: rtc_cmos 00:04: registered as rtc0
Nov 21 23:42:27 np0005531754 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 21 23:42:27 np0005531754 kernel: rtc_cmos 00:04: setting system clock to 2025-11-22T04:42:26 UTC (1763786546)
Nov 21 23:42:27 np0005531754 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 21 23:42:27 np0005531754 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 21 23:42:27 np0005531754 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 21 23:42:27 np0005531754 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 21 23:42:27 np0005531754 kernel: usbcore: registered new interface driver usbhid
Nov 21 23:42:27 np0005531754 kernel: usbhid: USB HID core driver
Nov 21 23:42:27 np0005531754 kernel: drop_monitor: Initializing network drop monitor service
Nov 21 23:42:27 np0005531754 kernel: Initializing XFRM netlink socket
Nov 21 23:42:27 np0005531754 kernel: NET: Registered PF_INET6 protocol family
Nov 21 23:42:27 np0005531754 kernel: Segment Routing with IPv6
Nov 21 23:42:27 np0005531754 kernel: NET: Registered PF_PACKET protocol family
Nov 21 23:42:27 np0005531754 kernel: mpls_gso: MPLS GSO support
Nov 21 23:42:27 np0005531754 kernel: IPI shorthand broadcast: enabled
Nov 21 23:42:27 np0005531754 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 21 23:42:27 np0005531754 kernel: AES CTR mode by8 optimization enabled
Nov 21 23:42:27 np0005531754 kernel: sched_clock: Marking stable (1241001383, 150433338)->(1502009563, -110574842)
Nov 21 23:42:27 np0005531754 kernel: registered taskstats version 1
Nov 21 23:42:27 np0005531754 kernel: Loading compiled-in X.509 certificates
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 21 23:42:27 np0005531754 kernel: Demotion targets for Node 0: null
Nov 21 23:42:27 np0005531754 kernel: page_owner is disabled
Nov 21 23:42:27 np0005531754 kernel: Key type .fscrypt registered
Nov 21 23:42:27 np0005531754 kernel: Key type fscrypt-provisioning registered
Nov 21 23:42:27 np0005531754 kernel: Key type big_key registered
Nov 21 23:42:27 np0005531754 kernel: Key type encrypted registered
Nov 21 23:42:27 np0005531754 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 21 23:42:27 np0005531754 kernel: Loading compiled-in module X.509 certificates
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 21 23:42:27 np0005531754 kernel: ima: Allocated hash algorithm: sha256
Nov 21 23:42:27 np0005531754 kernel: ima: No architecture policies found
Nov 21 23:42:27 np0005531754 kernel: evm: Initialising EVM extended attributes:
Nov 21 23:42:27 np0005531754 kernel: evm: security.selinux
Nov 21 23:42:27 np0005531754 kernel: evm: security.SMACK64 (disabled)
Nov 21 23:42:27 np0005531754 kernel: evm: security.SMACK64EXEC (disabled)
Nov 21 23:42:27 np0005531754 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 21 23:42:27 np0005531754 kernel: evm: security.SMACK64MMAP (disabled)
Nov 21 23:42:27 np0005531754 kernel: evm: security.apparmor (disabled)
Nov 21 23:42:27 np0005531754 kernel: evm: security.ima
Nov 21 23:42:27 np0005531754 kernel: evm: security.capability
Nov 21 23:42:27 np0005531754 kernel: evm: HMAC attrs: 0x1
Nov 21 23:42:27 np0005531754 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 21 23:42:27 np0005531754 kernel: Running certificate verification RSA selftest
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 21 23:42:27 np0005531754 kernel: Running certificate verification ECDSA selftest
Nov 21 23:42:27 np0005531754 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 21 23:42:27 np0005531754 kernel: clk: Disabling unused clocks
Nov 21 23:42:27 np0005531754 kernel: Freeing unused decrypted memory: 2028K
Nov 21 23:42:27 np0005531754 kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 21 23:42:27 np0005531754 kernel: Write protecting the kernel read-only data: 30720k
Nov 21 23:42:27 np0005531754 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 21 23:42:27 np0005531754 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 21 23:42:27 np0005531754 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 21 23:42:27 np0005531754 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 21 23:42:27 np0005531754 kernel: usb 1-1: Manufacturer: QEMU
Nov 21 23:42:27 np0005531754 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 21 23:42:27 np0005531754 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 21 23:42:27 np0005531754 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 21 23:42:27 np0005531754 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 21 23:42:27 np0005531754 kernel: Run /init as init process
Nov 21 23:42:27 np0005531754 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 21 23:42:27 np0005531754 systemd: Detected virtualization kvm.
Nov 21 23:42:27 np0005531754 systemd: Detected architecture x86-64.
Nov 21 23:42:27 np0005531754 systemd: Running in initrd.
Nov 21 23:42:27 np0005531754 systemd: No hostname configured, using default hostname.
Nov 21 23:42:27 np0005531754 systemd: Hostname set to <localhost>.
Nov 21 23:42:27 np0005531754 systemd: Initializing machine ID from VM UUID.
Nov 21 23:42:27 np0005531754 systemd: Queued start job for default target Initrd Default Target.
Nov 21 23:42:27 np0005531754 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 21 23:42:27 np0005531754 systemd: Reached target Local Encrypted Volumes.
Nov 21 23:42:27 np0005531754 systemd: Reached target Initrd /usr File System.
Nov 21 23:42:27 np0005531754 systemd: Reached target Local File Systems.
Nov 21 23:42:27 np0005531754 systemd: Reached target Path Units.
Nov 21 23:42:27 np0005531754 systemd: Reached target Slice Units.
Nov 21 23:42:27 np0005531754 systemd: Reached target Swaps.
Nov 21 23:42:27 np0005531754 systemd: Reached target Timer Units.
Nov 21 23:42:27 np0005531754 systemd: Listening on D-Bus System Message Bus Socket.
Nov 21 23:42:27 np0005531754 systemd: Listening on Journal Socket (/dev/log).
Nov 21 23:42:27 np0005531754 systemd: Listening on Journal Socket.
Nov 21 23:42:27 np0005531754 systemd: Listening on udev Control Socket.
Nov 21 23:42:27 np0005531754 systemd: Listening on udev Kernel Socket.
Nov 21 23:42:27 np0005531754 systemd: Reached target Socket Units.
Nov 21 23:42:27 np0005531754 systemd: Starting Create List of Static Device Nodes...
Nov 21 23:42:27 np0005531754 systemd: Starting Journal Service...
Nov 21 23:42:27 np0005531754 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 21 23:42:27 np0005531754 systemd: Starting Apply Kernel Variables...
Nov 21 23:42:27 np0005531754 systemd: Starting Create System Users...
Nov 21 23:42:27 np0005531754 systemd: Starting Setup Virtual Console...
Nov 21 23:42:27 np0005531754 systemd: Finished Create List of Static Device Nodes.
Nov 21 23:42:27 np0005531754 systemd: Finished Apply Kernel Variables.
Nov 21 23:42:27 np0005531754 systemd: Finished Create System Users.
Nov 21 23:42:27 np0005531754 systemd-journald[309]: Journal started
Nov 21 23:42:27 np0005531754 systemd-journald[309]: Runtime Journal (/run/log/journal/66851c39840f46c8adfc77dc6a7d91a4) is 8.0M, max 153.6M, 145.6M free.
Nov 21 23:42:27 np0005531754 systemd-sysusers[313]: Creating group 'users' with GID 100.
Nov 21 23:42:27 np0005531754 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Nov 21 23:42:27 np0005531754 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 21 23:42:27 np0005531754 systemd: Started Journal Service.
Nov 21 23:42:27 np0005531754 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 21 23:42:27 np0005531754 systemd[1]: Starting Create Volatile Files and Directories...
Nov 21 23:42:27 np0005531754 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 21 23:42:27 np0005531754 systemd[1]: Finished Create Volatile Files and Directories.
Nov 21 23:42:27 np0005531754 systemd[1]: Finished Setup Virtual Console.
Nov 21 23:42:27 np0005531754 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 21 23:42:27 np0005531754 systemd[1]: Starting dracut cmdline hook...
Nov 21 23:42:27 np0005531754 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Nov 21 23:42:27 np0005531754 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 21 23:42:27 np0005531754 systemd[1]: Finished dracut cmdline hook.
Nov 21 23:42:27 np0005531754 systemd[1]: Starting dracut pre-udev hook...
Nov 21 23:42:27 np0005531754 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 21 23:42:27 np0005531754 kernel: device-mapper: uevent: version 1.0.3
Nov 21 23:42:27 np0005531754 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 21 23:42:27 np0005531754 kernel: RPC: Registered named UNIX socket transport module.
Nov 21 23:42:27 np0005531754 kernel: RPC: Registered udp transport module.
Nov 21 23:42:27 np0005531754 kernel: RPC: Registered tcp transport module.
Nov 21 23:42:27 np0005531754 kernel: RPC: Registered tcp-with-tls transport module.
Nov 21 23:42:27 np0005531754 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 21 23:42:27 np0005531754 rpc.statd[445]: Version 2.5.4 starting
Nov 21 23:42:27 np0005531754 rpc.statd[445]: Initializing NSM state
Nov 21 23:42:27 np0005531754 rpc.idmapd[450]: Setting log level to 0
Nov 21 23:42:27 np0005531754 systemd[1]: Finished dracut pre-udev hook.
Nov 21 23:42:28 np0005531754 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 21 23:42:28 np0005531754 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Nov 21 23:42:28 np0005531754 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 21 23:42:28 np0005531754 systemd[1]: Starting dracut pre-trigger hook...
Nov 21 23:42:28 np0005531754 systemd[1]: Finished dracut pre-trigger hook.
Nov 21 23:42:28 np0005531754 systemd[1]: Starting Coldplug All udev Devices...
Nov 21 23:42:28 np0005531754 systemd[1]: Created slice Slice /system/modprobe.
Nov 21 23:42:28 np0005531754 systemd[1]: Starting Load Kernel Module configfs...
Nov 21 23:42:28 np0005531754 systemd[1]: Finished Coldplug All udev Devices.
Nov 21 23:42:28 np0005531754 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 21 23:42:28 np0005531754 systemd[1]: Finished Load Kernel Module configfs.
Nov 21 23:42:28 np0005531754 systemd[1]: Mounting Kernel Configuration File System...
Nov 21 23:42:28 np0005531754 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 21 23:42:28 np0005531754 systemd[1]: Reached target Network.
Nov 21 23:42:28 np0005531754 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 21 23:42:28 np0005531754 systemd[1]: Starting dracut initqueue hook...
Nov 21 23:42:28 np0005531754 systemd[1]: Mounted Kernel Configuration File System.
Nov 21 23:42:28 np0005531754 systemd[1]: Reached target System Initialization.
Nov 21 23:42:28 np0005531754 systemd[1]: Reached target Basic System.
Nov 21 23:42:28 np0005531754 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 21 23:42:28 np0005531754 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 21 23:42:28 np0005531754 kernel: vda: vda1
Nov 21 23:42:28 np0005531754 kernel: scsi host0: ata_piix
Nov 21 23:42:28 np0005531754 kernel: scsi host1: ata_piix
Nov 21 23:42:28 np0005531754 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 21 23:42:28 np0005531754 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 21 23:42:28 np0005531754 systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Nov 21 23:42:28 np0005531754 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 21 23:42:28 np0005531754 systemd[1]: Reached target Initrd Root Device.
Nov 21 23:42:28 np0005531754 kernel: ata1: found unknown device (class 0)
Nov 21 23:42:28 np0005531754 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 21 23:42:28 np0005531754 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 21 23:42:28 np0005531754 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 21 23:42:28 np0005531754 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 21 23:42:28 np0005531754 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 21 23:42:28 np0005531754 systemd[1]: Finished dracut initqueue hook.
Nov 21 23:42:28 np0005531754 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 21 23:42:28 np0005531754 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 21 23:42:28 np0005531754 systemd[1]: Reached target Remote File Systems.
Nov 21 23:42:28 np0005531754 systemd[1]: Starting dracut pre-mount hook...
Nov 21 23:42:28 np0005531754 systemd[1]: Finished dracut pre-mount hook.
Nov 21 23:42:28 np0005531754 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 21 23:42:28 np0005531754 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 21 23:42:28 np0005531754 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 21 23:42:28 np0005531754 systemd[1]: Mounting /sysroot...
Nov 21 23:42:29 np0005531754 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 21 23:42:29 np0005531754 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 21 23:42:29 np0005531754 kernel: XFS (vda1): Ending clean mount
Nov 21 23:42:29 np0005531754 systemd[1]: Mounted /sysroot.
Nov 21 23:42:29 np0005531754 systemd[1]: Reached target Initrd Root File System.
Nov 21 23:42:29 np0005531754 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 21 23:42:29 np0005531754 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 21 23:42:29 np0005531754 systemd[1]: Reached target Initrd File Systems.
Nov 21 23:42:29 np0005531754 systemd[1]: Reached target Initrd Default Target.
Nov 21 23:42:29 np0005531754 systemd[1]: Starting dracut mount hook...
Nov 21 23:42:29 np0005531754 systemd[1]: Finished dracut mount hook.
Nov 21 23:42:29 np0005531754 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 21 23:42:29 np0005531754 rpc.idmapd[450]: exiting on signal 15
Nov 21 23:42:29 np0005531754 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 21 23:42:29 np0005531754 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Network.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Timer Units.
Nov 21 23:42:29 np0005531754 systemd[1]: dbus.socket: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 21 23:42:29 np0005531754 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Initrd Default Target.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Basic System.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Initrd Root Device.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Initrd /usr File System.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Path Units.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Remote File Systems.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Slice Units.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Socket Units.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target System Initialization.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Local File Systems.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Swaps.
Nov 21 23:42:29 np0005531754 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped dracut mount hook.
Nov 21 23:42:29 np0005531754 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped dracut pre-mount hook.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 21 23:42:29 np0005531754 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped dracut initqueue hook.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Apply Kernel Variables.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Coldplug All udev Devices.
Nov 21 23:42:29 np0005531754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped dracut pre-trigger hook.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Setup Virtual Console.
Nov 21 23:42:29 np0005531754 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Closed udev Control Socket.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Closed udev Kernel Socket.
Nov 21 23:42:29 np0005531754 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped dracut pre-udev hook.
Nov 21 23:42:29 np0005531754 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped dracut cmdline hook.
Nov 21 23:42:29 np0005531754 systemd[1]: Starting Cleanup udev Database...
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 21 23:42:29 np0005531754 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 21 23:42:29 np0005531754 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Stopped Create System Users.
Nov 21 23:42:29 np0005531754 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 21 23:42:29 np0005531754 systemd[1]: Finished Cleanup udev Database.
Nov 21 23:42:29 np0005531754 systemd[1]: Reached target Switch Root.
Nov 21 23:42:29 np0005531754 systemd[1]: Starting Switch Root...
Nov 21 23:42:29 np0005531754 systemd[1]: Switching root.
Nov 21 23:42:29 np0005531754 systemd-journald[309]: Journal stopped
Nov 21 23:42:30 np0005531754 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 21 23:42:30 np0005531754 kernel: audit: type=1404 audit(1763786549.935:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 21 23:42:30 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 21 23:42:30 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 21 23:42:30 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 21 23:42:30 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 21 23:42:30 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 21 23:42:30 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 21 23:42:30 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 21 23:42:30 np0005531754 kernel: audit: type=1403 audit(1763786550.117:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 21 23:42:30 np0005531754 systemd: Successfully loaded SELinux policy in 189.202ms.
Nov 21 23:42:30 np0005531754 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.873ms.
Nov 21 23:42:30 np0005531754 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 21 23:42:30 np0005531754 systemd: Detected virtualization kvm.
Nov 21 23:42:30 np0005531754 systemd: Detected architecture x86-64.
Nov 21 23:42:30 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 21 23:42:30 np0005531754 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 21 23:42:30 np0005531754 systemd: Stopped Switch Root.
Nov 21 23:42:30 np0005531754 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 21 23:42:30 np0005531754 systemd: Created slice Slice /system/getty.
Nov 21 23:42:30 np0005531754 systemd: Created slice Slice /system/serial-getty.
Nov 21 23:42:30 np0005531754 systemd: Created slice Slice /system/sshd-keygen.
Nov 21 23:42:30 np0005531754 systemd: Created slice User and Session Slice.
Nov 21 23:42:30 np0005531754 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 21 23:42:30 np0005531754 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 21 23:42:30 np0005531754 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 21 23:42:30 np0005531754 systemd: Reached target Local Encrypted Volumes.
Nov 21 23:42:30 np0005531754 systemd: Stopped target Switch Root.
Nov 21 23:42:30 np0005531754 systemd: Stopped target Initrd File Systems.
Nov 21 23:42:30 np0005531754 systemd: Stopped target Initrd Root File System.
Nov 21 23:42:30 np0005531754 systemd: Reached target Local Integrity Protected Volumes.
Nov 21 23:42:30 np0005531754 systemd: Reached target Path Units.
Nov 21 23:42:30 np0005531754 systemd: Reached target rpc_pipefs.target.
Nov 21 23:42:30 np0005531754 systemd: Reached target Slice Units.
Nov 21 23:42:30 np0005531754 systemd: Reached target Swaps.
Nov 21 23:42:30 np0005531754 systemd: Reached target Local Verity Protected Volumes.
Nov 21 23:42:30 np0005531754 systemd: Listening on RPCbind Server Activation Socket.
Nov 21 23:42:30 np0005531754 systemd: Reached target RPC Port Mapper.
Nov 21 23:42:30 np0005531754 systemd: Listening on Process Core Dump Socket.
Nov 21 23:42:30 np0005531754 systemd: Listening on initctl Compatibility Named Pipe.
Nov 21 23:42:30 np0005531754 systemd: Listening on udev Control Socket.
Nov 21 23:42:30 np0005531754 systemd: Listening on udev Kernel Socket.
Nov 21 23:42:30 np0005531754 systemd: Mounting Huge Pages File System...
Nov 21 23:42:30 np0005531754 systemd: Mounting POSIX Message Queue File System...
Nov 21 23:42:30 np0005531754 systemd: Mounting Kernel Debug File System...
Nov 21 23:42:30 np0005531754 systemd: Mounting Kernel Trace File System...
Nov 21 23:42:30 np0005531754 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 21 23:42:30 np0005531754 systemd: Starting Create List of Static Device Nodes...
Nov 21 23:42:30 np0005531754 systemd: Starting Load Kernel Module configfs...
Nov 21 23:42:30 np0005531754 systemd: Starting Load Kernel Module drm...
Nov 21 23:42:30 np0005531754 systemd: Starting Load Kernel Module efi_pstore...
Nov 21 23:42:30 np0005531754 systemd: Starting Load Kernel Module fuse...
Nov 21 23:42:30 np0005531754 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 21 23:42:30 np0005531754 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 21 23:42:30 np0005531754 systemd: Stopped File System Check on Root Device.
Nov 21 23:42:30 np0005531754 systemd: Stopped Journal Service.
Nov 21 23:42:30 np0005531754 kernel: fuse: init (API version 7.37)
Nov 21 23:42:30 np0005531754 systemd: Starting Journal Service...
Nov 21 23:42:30 np0005531754 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 21 23:42:30 np0005531754 systemd: Starting Generate network units from Kernel command line...
Nov 21 23:42:30 np0005531754 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 21 23:42:30 np0005531754 systemd: Starting Remount Root and Kernel File Systems...
Nov 21 23:42:30 np0005531754 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 21 23:42:30 np0005531754 systemd: Starting Apply Kernel Variables...
Nov 21 23:42:30 np0005531754 systemd: Starting Coldplug All udev Devices...
Nov 21 23:42:30 np0005531754 systemd: Mounted Huge Pages File System.
Nov 21 23:42:30 np0005531754 systemd-journald[680]: Journal started
Nov 21 23:42:30 np0005531754 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 21 23:42:30 np0005531754 systemd[1]: Queued start job for default target Multi-User System.
Nov 21 23:42:30 np0005531754 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 21 23:42:30 np0005531754 systemd: Started Journal Service.
Nov 21 23:42:30 np0005531754 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 21 23:42:30 np0005531754 systemd[1]: Mounted POSIX Message Queue File System.
Nov 21 23:42:30 np0005531754 systemd[1]: Mounted Kernel Debug File System.
Nov 21 23:42:30 np0005531754 systemd[1]: Mounted Kernel Trace File System.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Create List of Static Device Nodes.
Nov 21 23:42:30 np0005531754 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Load Kernel Module configfs.
Nov 21 23:42:30 np0005531754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 21 23:42:30 np0005531754 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Load Kernel Module fuse.
Nov 21 23:42:30 np0005531754 kernel: ACPI: bus type drm_connector registered
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 21 23:42:30 np0005531754 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Load Kernel Module drm.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Generate network units from Kernel command line.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Apply Kernel Variables.
Nov 21 23:42:30 np0005531754 systemd[1]: Mounting FUSE Control File System...
Nov 21 23:42:30 np0005531754 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 21 23:42:30 np0005531754 systemd[1]: Starting Rebuild Hardware Database...
Nov 21 23:42:30 np0005531754 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 21 23:42:30 np0005531754 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 21 23:42:30 np0005531754 systemd[1]: Starting Load/Save OS Random Seed...
Nov 21 23:42:30 np0005531754 systemd[1]: Starting Create System Users...
Nov 21 23:42:30 np0005531754 systemd[1]: Mounted FUSE Control File System.
Nov 21 23:42:30 np0005531754 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 21 23:42:30 np0005531754 systemd-journald[680]: Received client request to flush runtime journal.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Load/Save OS Random Seed.
Nov 21 23:42:30 np0005531754 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Coldplug All udev Devices.
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Create System Users.
Nov 21 23:42:30 np0005531754 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 21 23:42:30 np0005531754 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 21 23:42:30 np0005531754 systemd[1]: Reached target Preparation for Local File Systems.
Nov 21 23:42:30 np0005531754 systemd[1]: Reached target Local File Systems.
Nov 21 23:42:30 np0005531754 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 21 23:42:31 np0005531754 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 21 23:42:31 np0005531754 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 21 23:42:31 np0005531754 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Automatic Boot Loader Update...
Nov 21 23:42:31 np0005531754 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Create Volatile Files and Directories...
Nov 21 23:42:31 np0005531754 bootctl[698]: Couldn't find EFI system partition, skipping.
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Automatic Boot Loader Update.
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Create Volatile Files and Directories.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Security Auditing Service...
Nov 21 23:42:31 np0005531754 systemd[1]: Starting RPC Bind...
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Rebuild Journal Catalog...
Nov 21 23:42:31 np0005531754 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 21 23:42:31 np0005531754 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Rebuild Journal Catalog.
Nov 21 23:42:31 np0005531754 systemd[1]: Started RPC Bind.
Nov 21 23:42:31 np0005531754 augenrules[709]: /sbin/augenrules: No change
Nov 21 23:42:31 np0005531754 augenrules[724]: No rules
Nov 21 23:42:31 np0005531754 augenrules[724]: enabled 1
Nov 21 23:42:31 np0005531754 augenrules[724]: failure 1
Nov 21 23:42:31 np0005531754 augenrules[724]: pid 704
Nov 21 23:42:31 np0005531754 augenrules[724]: rate_limit 0
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog_limit 8192
Nov 21 23:42:31 np0005531754 augenrules[724]: lost 0
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog 1
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog_wait_time 60000
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog_wait_time_actual 0
Nov 21 23:42:31 np0005531754 augenrules[724]: enabled 1
Nov 21 23:42:31 np0005531754 augenrules[724]: failure 1
Nov 21 23:42:31 np0005531754 augenrules[724]: pid 704
Nov 21 23:42:31 np0005531754 augenrules[724]: rate_limit 0
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog_limit 8192
Nov 21 23:42:31 np0005531754 augenrules[724]: lost 0
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog 0
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog_wait_time 60000
Nov 21 23:42:31 np0005531754 augenrules[724]: backlog_wait_time_actual 0
Nov 21 23:42:31 np0005531754 systemd[1]: Started Security Auditing Service.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Rebuild Hardware Database.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Update is Completed...
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Update is Completed.
Nov 21 23:42:31 np0005531754 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Nov 21 23:42:31 np0005531754 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 21 23:42:31 np0005531754 systemd[1]: Reached target System Initialization.
Nov 21 23:42:31 np0005531754 systemd[1]: Started dnf makecache --timer.
Nov 21 23:42:31 np0005531754 systemd[1]: Started Daily rotation of log files.
Nov 21 23:42:31 np0005531754 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 21 23:42:31 np0005531754 systemd[1]: Reached target Timer Units.
Nov 21 23:42:31 np0005531754 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 21 23:42:31 np0005531754 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 21 23:42:31 np0005531754 systemd[1]: Reached target Socket Units.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting D-Bus System Message Bus...
Nov 21 23:42:31 np0005531754 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 21 23:42:31 np0005531754 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Load Kernel Module configfs...
Nov 21 23:42:31 np0005531754 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 21 23:42:31 np0005531754 systemd[1]: Finished Load Kernel Module configfs.
Nov 21 23:42:31 np0005531754 systemd[1]: Started D-Bus System Message Bus.
Nov 21 23:42:31 np0005531754 systemd[1]: Reached target Basic System.
Nov 21 23:42:31 np0005531754 dbus-broker-lau[757]: Ready
Nov 21 23:42:31 np0005531754 systemd[1]: Starting NTP client/server...
Nov 21 23:42:31 np0005531754 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 21 23:42:31 np0005531754 systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
Nov 21 23:42:31 np0005531754 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 21 23:42:31 np0005531754 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 21 23:42:31 np0005531754 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 21 23:42:31 np0005531754 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 21 23:42:31 np0005531754 chronyd[781]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 21 23:42:31 np0005531754 chronyd[781]: Loaded 0 symmetric keys
Nov 21 23:42:31 np0005531754 chronyd[781]: Using right/UTC timezone to obtain leap second data
Nov 21 23:42:31 np0005531754 chronyd[781]: Loaded seccomp filter (level 2)
Nov 21 23:42:31 np0005531754 systemd[1]: Starting IPv4 firewall with iptables...
Nov 21 23:42:31 np0005531754 systemd[1]: Started irqbalance daemon.
Nov 21 23:42:31 np0005531754 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 21 23:42:31 np0005531754 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 21 23:42:31 np0005531754 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 21 23:42:31 np0005531754 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 21 23:42:31 np0005531754 systemd[1]: Reached target sshd-keygen.target.
Nov 21 23:42:31 np0005531754 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 21 23:42:31 np0005531754 systemd[1]: Reached target User and Group Name Lookups.
Nov 21 23:42:31 np0005531754 systemd[1]: Starting User Login Management...
Nov 21 23:42:31 np0005531754 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 21 23:42:31 np0005531754 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 21 23:42:31 np0005531754 kernel: kvm_amd: TSC scaling supported
Nov 21 23:42:31 np0005531754 kernel: kvm_amd: Nested Virtualization enabled
Nov 21 23:42:31 np0005531754 kernel: kvm_amd: Nested Paging enabled
Nov 21 23:42:31 np0005531754 kernel: kvm_amd: LBR virtualization supported
Nov 21 23:42:31 np0005531754 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 21 23:42:31 np0005531754 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 21 23:42:31 np0005531754 kernel: Console: switching to colour dummy device 80x25
Nov 21 23:42:31 np0005531754 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 21 23:42:31 np0005531754 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 21 23:42:31 np0005531754 kernel: [drm] features: -context_init
Nov 21 23:42:31 np0005531754 kernel: [drm] number of scanouts: 1
Nov 21 23:42:31 np0005531754 kernel: [drm] number of cap sets: 0
Nov 21 23:42:31 np0005531754 systemd-logind[798]: New seat seat0.
Nov 21 23:42:31 np0005531754 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 21 23:42:31 np0005531754 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 21 23:42:31 np0005531754 systemd[1]: Started User Login Management.
Nov 21 23:42:31 np0005531754 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 21 23:42:31 np0005531754 systemd[1]: Started NTP client/server.
Nov 21 23:42:31 np0005531754 kernel: Console: switching to colour frame buffer device 128x48
Nov 21 23:42:32 np0005531754 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 21 23:42:32 np0005531754 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 21 23:42:32 np0005531754 iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Nov 21 23:42:32 np0005531754 systemd[1]: Finished IPv4 firewall with iptables.
Nov 21 23:42:32 np0005531754 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 22 Nov 2025 04:42:32 +0000. Up 7.20 seconds.
Nov 21 23:42:32 np0005531754 systemd[1]: run-cloud\x2dinit-tmp-tmpum_cq3dw.mount: Deactivated successfully.
Nov 21 23:42:32 np0005531754 systemd[1]: Starting Hostname Service...
Nov 21 23:42:32 np0005531754 systemd[1]: Started Hostname Service.
Nov 21 23:42:32 np0005531754 systemd-hostnamed[854]: Hostname set to <np0005531754.novalocal> (static)
Nov 21 23:42:33 np0005531754 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 21 23:42:33 np0005531754 systemd[1]: Reached target Preparation for Network.
Nov 21 23:42:33 np0005531754 systemd[1]: Starting Network Manager...
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2305] NetworkManager (version 1.54.1-1.el9) is starting... (boot:0ad7a365-484a-42b3-93c5-a59cf6bc29d9)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2313] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2496] manager[0x56317bbd3080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2573] hostname: hostname: using hostnamed
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2573] hostname: static hostname changed from (none) to "np0005531754.novalocal"
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2581] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2753] manager[0x56317bbd3080]: rfkill: Wi-Fi hardware radio set enabled
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2754] manager[0x56317bbd3080]: rfkill: WWAN hardware radio set enabled
Nov 21 23:42:33 np0005531754 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2934] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2935] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2936] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2937] manager: Networking is enabled by state file
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2939] settings: Loaded settings plugin: keyfile (internal)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.2990] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3048] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3110] dhcp: init: Using DHCP client 'internal'
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3115] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3137] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3160] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3174] device (lo): Activation: starting connection 'lo' (29f19999-cee5-4ca2-a804-2bcb67c28530)
Nov 21 23:42:33 np0005531754 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3192] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3197] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 21 23:42:33 np0005531754 systemd[1]: Started Network Manager.
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3243] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3254] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3258] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3261] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3264] device (eth0): carrier: link connected
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3271] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3281] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 21 23:42:33 np0005531754 systemd[1]: Reached target Network.
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3291] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3297] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3298] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3301] manager: NetworkManager state is now CONNECTING
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3303] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3315] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3319] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 21 23:42:33 np0005531754 systemd[1]: Starting Network Manager Wait Online...
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3377] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3384] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3406] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 21 23:42:33 np0005531754 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 21 23:42:33 np0005531754 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3600] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3603] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3605] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3614] device (lo): Activation: successful, device activated.
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3621] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3627] manager: NetworkManager state is now CONNECTED_SITE
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3630] device (eth0): Activation: successful, device activated.
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3639] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 21 23:42:33 np0005531754 NetworkManager[858]: <info>  [1763786553.3643] manager: startup complete
Nov 21 23:42:33 np0005531754 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 21 23:42:33 np0005531754 systemd[1]: Finished Network Manager Wait Online.
Nov 21 23:42:33 np0005531754 systemd[1]: Starting Cloud-init: Network Stage...
Nov 21 23:42:33 np0005531754 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 21 23:42:33 np0005531754 systemd[1]: Reached target NFS client services.
Nov 21 23:42:33 np0005531754 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 21 23:42:33 np0005531754 systemd[1]: Reached target Remote File Systems.
Nov 21 23:42:33 np0005531754 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 21 23:42:33 np0005531754 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 22 Nov 2025 04:42:33 +0000. Up 8.36 seconds.
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.23         | 255.255.255.0 | global | fa:16:3e:56:fc:55 |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe56:fc55/64 |       .       |  link  | fa:16:3e:56:fc:55 |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 21 23:42:33 np0005531754 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 21 23:42:35 np0005531754 cloud-init[921]: Generating public/private rsa key pair.
Nov 21 23:42:35 np0005531754 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 21 23:42:35 np0005531754 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 21 23:42:35 np0005531754 cloud-init[921]: The key fingerprint is:
Nov 21 23:42:35 np0005531754 cloud-init[921]: SHA256:Pd18xs2QwMu9vWq8bkW/9inXkdMB7tAJwH0P0TKKe1I root@np0005531754.novalocal
Nov 21 23:42:35 np0005531754 cloud-init[921]: The key's randomart image is:
Nov 21 23:42:35 np0005531754 cloud-init[921]: +---[RSA 3072]----+
Nov 21 23:42:35 np0005531754 cloud-init[921]: |        ..o...o  |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |         . o.B o |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |          ..*oX  |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |         o Eo*.B.|
Nov 21 23:42:35 np0005531754 cloud-init[921]: |        S = + +o@|
Nov 21 23:42:35 np0005531754 cloud-init[921]: |         o o ..B+|
Nov 21 23:42:35 np0005531754 cloud-init[921]: |          o . . *|
Nov 21 23:42:35 np0005531754 cloud-init[921]: |             = =o|
Nov 21 23:42:35 np0005531754 cloud-init[921]: |            ++*.o|
Nov 21 23:42:35 np0005531754 cloud-init[921]: +----[SHA256]-----+
Nov 21 23:42:35 np0005531754 cloud-init[921]: Generating public/private ecdsa key pair.
Nov 21 23:42:35 np0005531754 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 21 23:42:35 np0005531754 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 21 23:42:35 np0005531754 cloud-init[921]: The key fingerprint is:
Nov 21 23:42:35 np0005531754 cloud-init[921]: SHA256:ie4Hw3m1Ucaaoxq1WmD3VMkZ+EcQeoE4kuoXHJx46Nw root@np0005531754.novalocal
Nov 21 23:42:35 np0005531754 cloud-init[921]: The key's randomart image is:
Nov 21 23:42:35 np0005531754 cloud-init[921]: +---[ECDSA 256]---+
Nov 21 23:42:35 np0005531754 cloud-init[921]: |     + o . ==*   |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |    o B o o.O..  |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |   o = o ..B..   |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |    + E.o.B.. .  |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |   . o.*S* + .   |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |    ..B = o      |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |     ..O         |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |     .o .        |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |      ..         |
Nov 21 23:42:35 np0005531754 cloud-init[921]: +----[SHA256]-----+
Nov 21 23:42:35 np0005531754 cloud-init[921]: Generating public/private ed25519 key pair.
Nov 21 23:42:35 np0005531754 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 21 23:42:35 np0005531754 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 21 23:42:35 np0005531754 cloud-init[921]: The key fingerprint is:
Nov 21 23:42:35 np0005531754 cloud-init[921]: SHA256:uRWf9TMjQNwacifyDPzhEUjlvB4XCAfporYhFomYSmU root@np0005531754.novalocal
Nov 21 23:42:35 np0005531754 cloud-init[921]: The key's randomart image is:
Nov 21 23:42:35 np0005531754 cloud-init[921]: +--[ED25519 256]--+
Nov 21 23:42:35 np0005531754 cloud-init[921]: |        o+*=o    |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |   E     B*B.o   |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |..+ .   . @=B..  |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |oo o   . o Ooo.. |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |o   . . S .ooo +.|
Nov 21 23:42:35 np0005531754 cloud-init[921]: |.  o +   o. o . +|
Nov 21 23:42:35 np0005531754 cloud-init[921]: |  . o o .  .     |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |     .           |
Nov 21 23:42:35 np0005531754 cloud-init[921]: |                 |
Nov 21 23:42:35 np0005531754 cloud-init[921]: +----[SHA256]-----+
Nov 21 23:42:35 np0005531754 systemd[1]: Finished Cloud-init: Network Stage.
Nov 21 23:42:35 np0005531754 systemd[1]: Reached target Cloud-config availability.
Nov 21 23:42:35 np0005531754 systemd[1]: Reached target Network is Online.
Nov 21 23:42:35 np0005531754 systemd[1]: Starting Cloud-init: Config Stage...
Nov 21 23:42:35 np0005531754 systemd[1]: Starting Crash recovery kernel arming...
Nov 21 23:42:35 np0005531754 systemd[1]: Starting Notify NFS peers of a restart...
Nov 21 23:42:35 np0005531754 systemd[1]: Starting System Logging Service...
Nov 21 23:42:35 np0005531754 systemd[1]: Starting OpenSSH server daemon...
Nov 21 23:42:35 np0005531754 sm-notify[1004]: Version 2.5.4 starting
Nov 21 23:42:35 np0005531754 systemd[1]: Starting Permit User Sessions...
Nov 21 23:42:35 np0005531754 systemd[1]: Started Notify NFS peers of a restart.
Nov 21 23:42:35 np0005531754 systemd[1]: Finished Permit User Sessions.
Nov 21 23:42:35 np0005531754 systemd[1]: Started Command Scheduler.
Nov 21 23:42:35 np0005531754 systemd[1]: Started Getty on tty1.
Nov 21 23:42:35 np0005531754 systemd[1]: Started Serial Getty on ttyS0.
Nov 21 23:42:35 np0005531754 systemd[1]: Reached target Login Prompts.
Nov 21 23:42:35 np0005531754 systemd[1]: Started OpenSSH server daemon.
Nov 21 23:42:35 np0005531754 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Nov 21 23:42:35 np0005531754 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 21 23:42:35 np0005531754 systemd[1]: Started System Logging Service.
Nov 21 23:42:35 np0005531754 systemd[1]: Reached target Multi-User System.
Nov 21 23:42:35 np0005531754 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 21 23:42:35 np0005531754 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 21 23:42:35 np0005531754 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 21 23:42:35 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 21 23:42:35 np0005531754 kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Nov 21 23:42:35 np0005531754 kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 21 23:42:35 np0005531754 cloud-init[1089]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 22 Nov 2025 04:42:35 +0000. Up 10.36 seconds.
Nov 21 23:42:35 np0005531754 systemd[1]: Finished Cloud-init: Config Stage.
Nov 21 23:42:35 np0005531754 systemd[1]: Starting Cloud-init: Final Stage...
Nov 21 23:42:36 np0005531754 cloud-init[1232]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 22 Nov 2025 04:42:36 +0000. Up 10.78 seconds.
Nov 21 23:42:36 np0005531754 cloud-init[1271]: #############################################################
Nov 21 23:42:36 np0005531754 cloud-init[1274]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 21 23:42:36 np0005531754 cloud-init[1280]: 256 SHA256:ie4Hw3m1Ucaaoxq1WmD3VMkZ+EcQeoE4kuoXHJx46Nw root@np0005531754.novalocal (ECDSA)
Nov 21 23:42:36 np0005531754 cloud-init[1286]: 256 SHA256:uRWf9TMjQNwacifyDPzhEUjlvB4XCAfporYhFomYSmU root@np0005531754.novalocal (ED25519)
Nov 21 23:42:36 np0005531754 dracut[1284]: dracut-057-102.git20250818.el9
Nov 21 23:42:36 np0005531754 cloud-init[1290]: 3072 SHA256:Pd18xs2QwMu9vWq8bkW/9inXkdMB7tAJwH0P0TKKe1I root@np0005531754.novalocal (RSA)
Nov 21 23:42:36 np0005531754 cloud-init[1292]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 21 23:42:36 np0005531754 cloud-init[1294]: #############################################################
Nov 21 23:42:36 np0005531754 cloud-init[1232]: Cloud-init v. 24.4-7.el9 finished at Sat, 22 Nov 2025 04:42:36 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.03 seconds
Nov 21 23:42:36 np0005531754 systemd[1]: Finished Cloud-init: Final Stage.
Nov 21 23:42:36 np0005531754 systemd[1]: Reached target Cloud-init target.
Nov 21 23:42:36 np0005531754 dracut[1291]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 21 23:42:37 np0005531754 dracut[1291]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: memstrack is not available
Nov 21 23:42:38 np0005531754 dracut[1291]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 21 23:42:38 np0005531754 dracut[1291]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 21 23:42:38 np0005531754 chronyd[781]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 21 23:42:39 np0005531754 chronyd[781]: System clock wrong by 1.240351 seconds
Nov 21 23:42:39 np0005531754 chronyd[781]: System clock was stepped by 1.240351 seconds
Nov 21 23:42:39 np0005531754 chronyd[781]: System clock TAI offset set to 37 seconds
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 21 23:42:39 np0005531754 dracut[1291]: memstrack is not available
Nov 21 23:42:39 np0005531754 dracut[1291]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 21 23:42:39 np0005531754 dracut[1291]: *** Including module: systemd ***
Nov 21 23:42:40 np0005531754 dracut[1291]: *** Including module: fips ***
Nov 21 23:42:40 np0005531754 dracut[1291]: *** Including module: systemd-initrd ***
Nov 21 23:42:40 np0005531754 dracut[1291]: *** Including module: i18n ***
Nov 21 23:42:40 np0005531754 dracut[1291]: *** Including module: drm ***
Nov 21 23:42:41 np0005531754 dracut[1291]: *** Including module: prefixdevname ***
Nov 21 23:42:41 np0005531754 dracut[1291]: *** Including module: kernel-modules ***
Nov 21 23:42:41 np0005531754 kernel: block vda: the capability attribute has been deprecated.
Nov 21 23:42:42 np0005531754 dracut[1291]: *** Including module: kernel-modules-extra ***
Nov 21 23:42:42 np0005531754 dracut[1291]: *** Including module: qemu ***
Nov 21 23:42:42 np0005531754 dracut[1291]: *** Including module: fstab-sys ***
Nov 21 23:42:42 np0005531754 dracut[1291]: *** Including module: rootfs-block ***
Nov 21 23:42:42 np0005531754 dracut[1291]: *** Including module: terminfo ***
Nov 21 23:42:42 np0005531754 dracut[1291]: *** Including module: udev-rules ***
Nov 21 23:42:43 np0005531754 dracut[1291]: Skipping udev rule: 91-permissions.rules
Nov 21 23:42:43 np0005531754 dracut[1291]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 21 23:42:43 np0005531754 dracut[1291]: *** Including module: virtiofs ***
Nov 21 23:42:43 np0005531754 dracut[1291]: *** Including module: dracut-systemd ***
Nov 21 23:42:43 np0005531754 irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 21 23:42:43 np0005531754 irqbalance[791]: IRQ 25 affinity is now unmanaged
Nov 21 23:42:43 np0005531754 irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 21 23:42:43 np0005531754 irqbalance[791]: IRQ 31 affinity is now unmanaged
Nov 21 23:42:43 np0005531754 irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 21 23:42:43 np0005531754 irqbalance[791]: IRQ 28 affinity is now unmanaged
Nov 21 23:42:43 np0005531754 irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 21 23:42:43 np0005531754 irqbalance[791]: IRQ 32 affinity is now unmanaged
Nov 21 23:42:43 np0005531754 irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 21 23:42:43 np0005531754 irqbalance[791]: IRQ 30 affinity is now unmanaged
Nov 21 23:42:43 np0005531754 irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 21 23:42:43 np0005531754 irqbalance[791]: IRQ 29 affinity is now unmanaged
Nov 21 23:42:43 np0005531754 dracut[1291]: *** Including module: usrmount ***
Nov 21 23:42:43 np0005531754 dracut[1291]: *** Including module: base ***
Nov 21 23:42:43 np0005531754 dracut[1291]: *** Including module: fs-lib ***
Nov 21 23:42:43 np0005531754 dracut[1291]: *** Including module: kdumpbase ***
Nov 21 23:42:44 np0005531754 dracut[1291]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 21 23:42:44 np0005531754 dracut[1291]:  microcode_ctl module: mangling fw_dir
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 21 23:42:44 np0005531754 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 21 23:42:44 np0005531754 dracut[1291]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 21 23:42:44 np0005531754 dracut[1291]: *** Including module: openssl ***
Nov 21 23:42:44 np0005531754 dracut[1291]: *** Including module: shutdown ***
Nov 21 23:42:45 np0005531754 dracut[1291]: *** Including module: squash ***
Nov 21 23:42:45 np0005531754 dracut[1291]: *** Including modules done ***
Nov 21 23:42:45 np0005531754 dracut[1291]: *** Installing kernel module dependencies ***
Nov 21 23:42:46 np0005531754 dracut[1291]: *** Installing kernel module dependencies done ***
Nov 21 23:42:46 np0005531754 dracut[1291]: *** Resolving executable dependencies ***
Nov 21 23:42:48 np0005531754 dracut[1291]: *** Resolving executable dependencies done ***
Nov 21 23:42:48 np0005531754 dracut[1291]: *** Generating early-microcode cpio image ***
Nov 21 23:42:48 np0005531754 dracut[1291]: *** Store current command line parameters ***
Nov 21 23:42:48 np0005531754 dracut[1291]: Stored kernel commandline:
Nov 21 23:42:48 np0005531754 dracut[1291]: No dracut internal kernel commandline stored in the initramfs
Nov 21 23:42:48 np0005531754 dracut[1291]: *** Install squash loader ***
Nov 21 23:42:49 np0005531754 dracut[1291]: *** Squashing the files inside the initramfs ***
Nov 21 23:42:50 np0005531754 dracut[1291]: *** Squashing the files inside the initramfs done ***
Nov 21 23:42:50 np0005531754 dracut[1291]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 21 23:42:50 np0005531754 dracut[1291]: *** Hardlinking files ***
Nov 21 23:42:50 np0005531754 dracut[1291]: *** Hardlinking files done ***
Nov 21 23:42:50 np0005531754 dracut[1291]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 21 23:42:51 np0005531754 kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Nov 21 23:42:51 np0005531754 kdumpctl[1014]: kdump: Starting kdump: [OK]
Nov 21 23:42:51 np0005531754 systemd[1]: Finished Crash recovery kernel arming.
Nov 21 23:42:51 np0005531754 systemd[1]: Startup finished in 1.591s (kernel) + 2.997s (initrd) + 20.766s (userspace) = 25.355s.
Nov 21 23:43:04 np0005531754 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 21 23:43:06 np0005531754 systemd[1]: Created slice User Slice of UID 1000.
Nov 21 23:43:06 np0005531754 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 21 23:43:06 np0005531754 systemd-logind[798]: New session 1 of user zuul.
Nov 21 23:43:06 np0005531754 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 21 23:43:06 np0005531754 systemd[1]: Starting User Manager for UID 1000...
Nov 21 23:43:06 np0005531754 systemd[4302]: Queued start job for default target Main User Target.
Nov 21 23:43:06 np0005531754 systemd[4302]: Created slice User Application Slice.
Nov 21 23:43:06 np0005531754 systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 21 23:43:06 np0005531754 systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Nov 21 23:43:06 np0005531754 systemd[4302]: Reached target Paths.
Nov 21 23:43:06 np0005531754 systemd[4302]: Reached target Timers.
Nov 21 23:43:06 np0005531754 systemd[4302]: Starting D-Bus User Message Bus Socket...
Nov 21 23:43:06 np0005531754 systemd[4302]: Starting Create User's Volatile Files and Directories...
Nov 21 23:43:06 np0005531754 systemd[4302]: Finished Create User's Volatile Files and Directories.
Nov 21 23:43:06 np0005531754 systemd[4302]: Listening on D-Bus User Message Bus Socket.
Nov 21 23:43:06 np0005531754 systemd[4302]: Reached target Sockets.
Nov 21 23:43:06 np0005531754 systemd[4302]: Reached target Basic System.
Nov 21 23:43:06 np0005531754 systemd[4302]: Reached target Main User Target.
Nov 21 23:43:06 np0005531754 systemd[4302]: Startup finished in 229ms.
Nov 21 23:43:06 np0005531754 systemd[1]: Started User Manager for UID 1000.
Nov 21 23:43:07 np0005531754 systemd[1]: Started Session 1 of User zuul.
Nov 21 23:43:07 np0005531754 python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 21 23:43:10 np0005531754 python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 21 23:43:16 np0005531754 python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 21 23:43:17 np0005531754 python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 21 23:43:19 np0005531754 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBpUWqzN7rKnv/+ddt39fBVp0U+zbBXsmG93ls34HyWhLWKB/ajnai0sKL5TB5FWWUSInTMxoNNLgm1UlPKTui3jEvx7cA8oUrOI+sUharb/CsGk33xi4JXPppoauT2w0NMmnoIOlYiN9tGg7anp1XQDD7pu+J6Xr1NqJUceEcm/yz7o+AG4RoW+jQozuApioBPhMkEnO/ss7iAGQuSWghuxIURVUnTmZWxyYDyQkHEbnNr1RddXUKURwQnTRkwtzS0+b5DzwH1+YfNxomFjO+6ThSY/fEU+EvHoUdwGCqHGPw1TPC9Oq/n4iRkRi2YNW7beU9LZatBiGBXwXYkuL+QgGxLCoJuQ/PAk+d72wXT70X0iT9VvAmpsoqE9/Ld3x8ec4EnIsok9d8l3MnYnUn2OdXXVKtkyr1xYkULUS4sewXcDd5Vwij/jjVeWt5WN1bJTaxU9RDgEmuG1DJaGRY1el2Z9kcqtGsGjUmzgsVKAr/x4ISz6yF91AKyuhOKyk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:19 np0005531754 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:20 np0005531754 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:43:20 np0005531754 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763786599.8293571-207-193904859484822/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=601f438a9b1842e89a60d702fa83ffa8_id_rsa follow=False checksum=33ab2b9faf404664c2a582d5d25b5bbcf9a6dc98 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:21 np0005531754 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:43:21 np0005531754 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763786600.9384837-240-256143430379336/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=601f438a9b1842e89a60d702fa83ffa8_id_rsa.pub follow=False checksum=9c9dcf22f193e28444145b04c9ea0edc70a98c3b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:23 np0005531754 python3[4972]: ansible-ping Invoked with data=pong
Nov 21 23:43:24 np0005531754 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 21 23:43:25 np0005531754 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 21 23:43:26 np0005531754 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:27 np0005531754 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:27 np0005531754 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:27 np0005531754 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:28 np0005531754 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:28 np0005531754 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:30 np0005531754 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:31 np0005531754 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:43:31 np0005531754 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763786610.544543-21-26045239449004/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:32 np0005531754 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:32 np0005531754 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:32 np0005531754 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:33 np0005531754 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:33 np0005531754 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:33 np0005531754 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:34 np0005531754 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:34 np0005531754 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:34 np0005531754 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:35 np0005531754 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:35 np0005531754 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:35 np0005531754 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:36 np0005531754 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:36 np0005531754 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:36 np0005531754 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:36 np0005531754 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:37 np0005531754 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:37 np0005531754 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:37 np0005531754 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:38 np0005531754 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:38 np0005531754 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:38 np0005531754 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:38 np0005531754 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:39 np0005531754 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:39 np0005531754 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:39 np0005531754 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:43:42 np0005531754 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 21 23:43:42 np0005531754 systemd[1]: Starting Time & Date Service...
Nov 21 23:43:42 np0005531754 systemd[1]: Started Time & Date Service.
Nov 21 23:43:42 np0005531754 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Nov 21 23:43:43 np0005531754 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:44 np0005531754 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:43:44 np0005531754 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763786624.0445921-153-149998034230791/source _original_basename=tmp73cujooz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:45 np0005531754 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:43:45 np0005531754 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763786625.0436249-183-78888262236887/source _original_basename=tmp5phomsql follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:46 np0005531754 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:43:47 np0005531754 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763786626.3207955-231-119099449908049/source _original_basename=tmp_sojauz2 follow=False checksum=6bf095e75b543d66829428b8a294812d38465cfe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:47 np0005531754 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:43:48 np0005531754 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:43:48 np0005531754 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:43:49 np0005531754 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763786628.274287-273-232778249592164/source _original_basename=tmpl3bvw0ip follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:43:49 np0005531754 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-18aa-7dab-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:43:50 np0005531754 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-18aa-7dab-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 21 23:43:51 np0005531754 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:44:09 np0005531754 python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:44:12 np0005531754 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 21 23:44:46 np0005531754 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 21 23:44:46 np0005531754 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7357] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 21 23:44:46 np0005531754 systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7609] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7641] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7647] device (eth1): carrier: link connected
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7649] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7656] policy: auto-activating connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18)
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7661] device (eth1): Activation: starting connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18)
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7662] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7666] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7671] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 21 23:44:46 np0005531754 NetworkManager[858]: <info>  [1763786686.7676] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 21 23:44:48 np0005531754 python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-b797-5a6b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:44:58 np0005531754 python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:44:58 np0005531754 python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763786697.712137-102-67995338701127/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=9581fc4aa17865d42e880f9feece8ad2d131da8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:44:59 np0005531754 python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 21 23:44:59 np0005531754 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 21 23:44:59 np0005531754 systemd[1]: Stopped Network Manager Wait Online.
Nov 21 23:44:59 np0005531754 systemd[1]: Stopping Network Manager Wait Online...
Nov 21 23:44:59 np0005531754 systemd[1]: Stopping Network Manager...
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.4754] caught SIGTERM, shutting down normally.
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.4770] dhcp4 (eth0): canceled DHCP transaction
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.4771] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.4771] dhcp4 (eth0): state changed no lease
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.4779] manager: NetworkManager state is now CONNECTING
Nov 21 23:44:59 np0005531754 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.4985] dhcp4 (eth1): canceled DHCP transaction
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.4986] dhcp4 (eth1): state changed no lease
Nov 21 23:44:59 np0005531754 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 21 23:44:59 np0005531754 NetworkManager[858]: <info>  [1763786699.5270] exiting (success)
Nov 21 23:44:59 np0005531754 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 21 23:44:59 np0005531754 systemd[1]: Stopped Network Manager.
Nov 21 23:44:59 np0005531754 systemd[1]: NetworkManager.service: Consumed 1.375s CPU time, 10.0M memory peak.
Nov 21 23:44:59 np0005531754 systemd[1]: Starting Network Manager...
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.6213] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0ad7a365-484a-42b3-93c5-a59cf6bc29d9)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.6214] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.6282] manager[0x5643cc391070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 21 23:44:59 np0005531754 systemd[1]: Starting Hostname Service...
Nov 21 23:44:59 np0005531754 systemd[1]: Started Hostname Service.
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7443] hostname: hostname: using hostnamed
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7446] hostname: static hostname changed from (none) to "np0005531754.novalocal"
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7454] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7460] manager[0x5643cc391070]: rfkill: Wi-Fi hardware radio set enabled
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7461] manager[0x5643cc391070]: rfkill: WWAN hardware radio set enabled
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7508] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7508] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7510] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7511] manager: Networking is enabled by state file
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7516] settings: Loaded settings plugin: keyfile (internal)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7524] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7571] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7589] dhcp: init: Using DHCP client 'internal'
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7594] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7604] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7614] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7629] device (lo): Activation: starting connection 'lo' (29f19999-cee5-4ca2-a804-2bcb67c28530)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7641] device (eth0): carrier: link connected
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7649] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7658] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7659] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7673] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7686] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7698] device (eth1): carrier: link connected
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7705] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7715] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18) (indicated)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7716] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7726] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7737] device (eth1): Activation: starting connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7747] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 21 23:44:59 np0005531754 systemd[1]: Started Network Manager.
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7755] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7759] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7762] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7766] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7771] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7775] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7779] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7782] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7795] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7799] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7811] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7816] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7840] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7848] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7857] device (lo): Activation: successful, device activated.
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7869] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.7881] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 21 23:44:59 np0005531754 systemd[1]: Starting Network Manager Wait Online...
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.8822] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.8858] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.8862] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.8870] manager: NetworkManager state is now CONNECTED_SITE
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.8879] device (eth0): Activation: successful, device activated.
Nov 21 23:44:59 np0005531754 NetworkManager[7192]: <info>  [1763786699.8888] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 21 23:45:00 np0005531754 python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-b797-5a6b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:45:09 np0005531754 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 21 23:45:29 np0005531754 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.5669] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 21 23:45:44 np0005531754 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 21 23:45:44 np0005531754 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.5985] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.5988] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.5999] device (eth1): Activation: successful, device activated.
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6009] manager: startup complete
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6011] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <warn>  [1763786744.6018] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6031] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 21 23:45:44 np0005531754 systemd[1]: Finished Network Manager Wait Online.
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6135] dhcp4 (eth1): canceled DHCP transaction
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6136] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6136] dhcp4 (eth1): state changed no lease
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6156] policy: auto-activating connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6163] device (eth1): Activation: starting connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6165] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6173] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6183] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6197] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6248] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6251] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 21 23:45:44 np0005531754 NetworkManager[7192]: <info>  [1763786744.6261] device (eth1): Activation: successful, device activated.
Nov 21 23:45:54 np0005531754 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 21 23:45:57 np0005531754 systemd[4302]: Starting Mark boot as successful...
Nov 21 23:45:57 np0005531754 systemd[4302]: Finished Mark boot as successful.
Nov 21 23:45:58 np0005531754 python3[7365]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:45:59 np0005531754 python3[7438]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763786758.2375803-267-167195522605787/source _original_basename=tmpcngdf3al follow=False checksum=1c0a7dfd166548c9d6844776e0079fa19f47fafb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:46:59 np0005531754 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Nov 21 23:48:57 np0005531754 systemd[4302]: Created slice User Background Tasks Slice.
Nov 21 23:48:57 np0005531754 systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Nov 21 23:48:57 np0005531754 systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Nov 21 23:55:02 np0005531754 systemd-logind[798]: New session 3 of user zuul.
Nov 21 23:55:02 np0005531754 systemd[1]: Started Session 3 of User zuul.
Nov 21 23:55:02 np0005531754 python3[8229]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-409c-f072-000000001cc6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:55:02 np0005531754 python3[8257]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:55:03 np0005531754 python3[8284]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:55:03 np0005531754 python3[8312]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:55:03 np0005531754 python3[8338]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:55:04 np0005531754 python3[8364]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:55:04 np0005531754 python3[8444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:55:05 np0005531754 python3[8517]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763787304.4416366-476-246523104748827/source _original_basename=tmpy7t_n1x5 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:55:06 np0005531754 python3[8567]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 21 23:55:06 np0005531754 systemd[1]: Reloading.
Nov 21 23:55:06 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 21 23:55:07 np0005531754 python3[8627]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 21 23:55:07 np0005531754 python3[8653]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:55:08 np0005531754 python3[8681]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:55:08 np0005531754 python3[8709]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:55:08 np0005531754 python3[8739]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:55:09 np0005531754 python3[8766]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-409c-f072-000000001ccd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:55:09 np0005531754 python3[8796]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 21 23:55:11 np0005531754 systemd[1]: session-3.scope: Deactivated successfully.
Nov 21 23:55:11 np0005531754 systemd[1]: session-3.scope: Consumed 4.328s CPU time.
Nov 21 23:55:11 np0005531754 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Nov 21 23:55:11 np0005531754 systemd-logind[798]: Removed session 3.
Nov 21 23:55:13 np0005531754 systemd-logind[798]: New session 4 of user zuul.
Nov 21 23:55:13 np0005531754 systemd[1]: Started Session 4 of User zuul.
Nov 21 23:55:13 np0005531754 python3[8836]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 21 23:55:23 np0005531754 irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 21 23:55:23 np0005531754 irqbalance[791]: IRQ 27 affinity is now unmanaged
Nov 21 23:55:59 np0005531754 kernel: SELinux:  Converting 385 SID table entries...
Nov 21 23:55:59 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 21 23:55:59 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 21 23:55:59 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 21 23:55:59 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 21 23:55:59 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 21 23:55:59 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 21 23:55:59 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 21 23:56:18 np0005531754 kernel: SELinux:  Converting 385 SID table entries...
Nov 21 23:56:18 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 21 23:56:18 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 21 23:56:18 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 21 23:56:18 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 21 23:56:18 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 21 23:56:18 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 21 23:56:18 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 21 23:56:29 np0005531754 kernel: SELinux:  Converting 385 SID table entries...
Nov 21 23:56:29 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 21 23:56:29 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 21 23:56:29 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 21 23:56:29 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 21 23:56:29 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 21 23:56:29 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 21 23:56:29 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 21 23:56:33 np0005531754 setsebool[8997]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 21 23:56:33 np0005531754 setsebool[8997]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 21 23:56:48 np0005531754 kernel: SELinux:  Converting 388 SID table entries...
Nov 21 23:56:48 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 21 23:56:48 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 21 23:56:48 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 21 23:56:48 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 21 23:56:48 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 21 23:56:48 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 21 23:56:48 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 21 23:57:26 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 21 23:57:26 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 21 23:57:26 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 21 23:57:26 np0005531754 systemd[1]: Reloading.
Nov 21 23:57:26 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 21 23:57:26 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 21 23:57:29 np0005531754 python3[10721]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-6bc6-f53b-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 21 23:57:30 np0005531754 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 21 23:57:30 np0005531754 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 21 23:57:30 np0005531754 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 21 23:57:30 np0005531754 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 21 23:57:30 np0005531754 kernel: evm: overlay not supported
Nov 21 23:57:30 np0005531754 systemd[4302]: Starting D-Bus User Message Bus...
Nov 21 23:57:30 np0005531754 dbus-broker-launch[11661]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 21 23:57:30 np0005531754 dbus-broker-launch[11661]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 21 23:57:30 np0005531754 systemd[4302]: Started D-Bus User Message Bus.
Nov 21 23:57:30 np0005531754 dbus-broker-lau[11661]: Ready
Nov 21 23:57:30 np0005531754 systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 21 23:57:30 np0005531754 systemd[4302]: Created slice Slice /user.
Nov 21 23:57:30 np0005531754 systemd[4302]: podman-11503.scope: unit configures an IP firewall, but not running as root.
Nov 21 23:57:30 np0005531754 systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Nov 21 23:57:30 np0005531754 systemd[4302]: Started podman-11503.scope.
Nov 21 23:57:30 np0005531754 systemd[4302]: Started podman-pause-a028a28b.scope.
Nov 21 23:57:31 np0005531754 python3[12538]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.5:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.5:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:57:31 np0005531754 python3[12538]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 21 23:57:31 np0005531754 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Nov 21 23:57:31 np0005531754 systemd[1]: session-4.scope: Deactivated successfully.
Nov 21 23:57:31 np0005531754 systemd[1]: session-4.scope: Consumed 1min 2.227s CPU time.
Nov 21 23:57:31 np0005531754 systemd-logind[798]: Removed session 4.
Nov 21 23:57:57 np0005531754 systemd-logind[798]: New session 5 of user zuul.
Nov 21 23:57:57 np0005531754 systemd[1]: Started Session 5 of User zuul.
Nov 21 23:57:57 np0005531754 python3[19751]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2fCE4pM8UnUYabfT0MrGb94mFe26roqFQocYTDfQoYQ5AhY8f0UWitD/DgJ5xva2SW8YAkkt+bLqpMZbbNriA= zuul@np0005531753.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:57:58 np0005531754 python3[20028]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2fCE4pM8UnUYabfT0MrGb94mFe26roqFQocYTDfQoYQ5AhY8f0UWitD/DgJ5xva2SW8YAkkt+bLqpMZbbNriA= zuul@np0005531753.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:57:59 np0005531754 python3[20296]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005531754.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 21 23:57:59 np0005531754 python3[20502]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2fCE4pM8UnUYabfT0MrGb94mFe26roqFQocYTDfQoYQ5AhY8f0UWitD/DgJ5xva2SW8YAkkt+bLqpMZbbNriA= zuul@np0005531753.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 21 23:58:00 np0005531754 python3[20746]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 21 23:58:00 np0005531754 python3[20984]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763787479.8339-135-174447271139055/source _original_basename=tmpzd1wl7fv follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 21 23:58:01 np0005531754 python3[21296]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 21 23:58:01 np0005531754 systemd[1]: Starting Hostname Service...
Nov 21 23:58:01 np0005531754 systemd[1]: Started Hostname Service.
Nov 21 23:58:01 np0005531754 systemd-hostnamed[21396]: Changed pretty hostname to 'compute-0'
Nov 21 23:58:01 np0005531754 systemd-hostnamed[21396]: Hostname set to <compute-0> (static)
Nov 21 23:58:01 np0005531754 NetworkManager[7192]: <info>  [1763787481.7307] hostname: static hostname changed from "np0005531754.novalocal" to "compute-0"
Nov 21 23:58:01 np0005531754 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 21 23:58:01 np0005531754 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 21 23:58:02 np0005531754 systemd[1]: session-5.scope: Deactivated successfully.
Nov 21 23:58:02 np0005531754 systemd[1]: session-5.scope: Consumed 2.720s CPU time.
Nov 21 23:58:02 np0005531754 systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Nov 21 23:58:02 np0005531754 systemd-logind[798]: Removed session 5.
Nov 21 23:58:11 np0005531754 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 21 23:58:30 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 21 23:58:30 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 21 23:58:30 np0005531754 systemd[1]: man-db-cache-update.service: Consumed 1min 3.329s CPU time.
Nov 21 23:58:30 np0005531754 systemd[1]: run-r082c3a959f1c4c8180532759aca638a0.service: Deactivated successfully.
Nov 21 23:58:31 np0005531754 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 00:02:46 np0005531754 systemd-logind[798]: New session 6 of user zuul.
Nov 22 00:02:46 np0005531754 systemd[1]: Started Session 6 of User zuul.
Nov 22 00:02:47 np0005531754 python3[30855]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:02:48 np0005531754 python3[30971]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:02:49 np0005531754 python3[31044]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:02:49 np0005531754 python3[31070]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:02:50 np0005531754 python3[31143]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:02:50 np0005531754 python3[31169]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:02:51 np0005531754 python3[31242]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:02:51 np0005531754 python3[31268]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:02:52 np0005531754 python3[31341]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:02:52 np0005531754 python3[31367]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:02:52 np0005531754 python3[31440]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:02:53 np0005531754 python3[31466]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:02:53 np0005531754 python3[31539]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:02:53 np0005531754 python3[31565]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:02:54 np0005531754 python3[31638]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:03:06 np0005531754 python3[31696]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:08:06 np0005531754 systemd[1]: session-6.scope: Deactivated successfully.
Nov 22 00:08:06 np0005531754 systemd[1]: session-6.scope: Consumed 5.881s CPU time.
Nov 22 00:08:06 np0005531754 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Nov 22 00:08:06 np0005531754 systemd-logind[798]: Removed session 6.
Nov 22 00:15:27 np0005531754 systemd-logind[798]: New session 7 of user zuul.
Nov 22 00:15:27 np0005531754 systemd[1]: Started Session 7 of User zuul.
Nov 22 00:15:28 np0005531754 python3.9[31875]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:15:30 np0005531754 python3.9[32056]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:15:38 np0005531754 systemd[1]: session-7.scope: Deactivated successfully.
Nov 22 00:15:38 np0005531754 systemd[1]: session-7.scope: Consumed 8.245s CPU time.
Nov 22 00:15:38 np0005531754 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Nov 22 00:15:38 np0005531754 systemd-logind[798]: Removed session 7.
Nov 22 00:15:55 np0005531754 systemd-logind[798]: New session 8 of user zuul.
Nov 22 00:15:55 np0005531754 systemd[1]: Started Session 8 of User zuul.
Nov 22 00:15:55 np0005531754 python3.9[32267]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 00:15:57 np0005531754 python3.9[32441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:15:58 np0005531754 python3.9[32593]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:15:59 np0005531754 python3.9[32746]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:16:00 np0005531754 python3.9[32898]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:16:00 np0005531754 python3.9[33050]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:16:01 np0005531754 python3.9[33173]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788560.361447-73-225012149061364/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:16:02 np0005531754 python3.9[33325]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:16:03 np0005531754 python3.9[33481]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:16:03 np0005531754 python3.9[33633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:16:04 np0005531754 python3.9[33783]: ansible-ansible.builtin.service_facts Invoked
Nov 22 00:16:08 np0005531754 python3.9[34036]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:16:09 np0005531754 python3.9[34186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:16:10 np0005531754 python3.9[34340]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:16:11 np0005531754 python3.9[34498]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:16:12 np0005531754 python3.9[34582]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:16:55 np0005531754 systemd[1]: Reloading.
Nov 22 00:16:55 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:16:55 np0005531754 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 22 00:16:55 np0005531754 systemd[1]: Reloading.
Nov 22 00:16:55 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:16:55 np0005531754 systemd[1]: Starting dnf makecache...
Nov 22 00:16:55 np0005531754 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 22 00:16:55 np0005531754 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 22 00:16:55 np0005531754 systemd[1]: Reloading.
Nov 22 00:16:55 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:16:56 np0005531754 dnf[34831]: Failed determining last makecache time.
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-barbican-42b4c41831408a8e323 115 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 145 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-cinder-1c00d6490d88e436f26ef  14 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-python-stevedore-c4acc5639fd2329372142 141 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-python-observabilityclient-2f31846d73c 129 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 00:16:56 np0005531754 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-os-net-config-bbae2ed8a159b0435a473f38 134 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 137 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-python-designate-tests-tempest-347fdbc 139 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-glance-1fd12c29b339f30fe823e 137 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 107 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-manila-3c01b7181572c95dac462 153 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-python-whitebox-neutron-tests-tempest- 157 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-octavia-ba397f07a7331190208c 152 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-watcher-c014f81a8647287f6dcc 152 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-python-tcib-1124124ec06aadbac34f0d340b 164 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 154 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-swift-dc98a8463506ac520c469a 141 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-python-tempestconf-8515371b7cceebd4282 143 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: delorean-openstack-heat-ui-013accbfd179753bc3f0 142 kB/s | 3.0 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: CentOS Stream 9 - BaseOS                         77 kB/s | 7.3 kB     00:00
Nov 22 00:16:56 np0005531754 dnf[34831]: CentOS Stream 9 - AppStream                      76 kB/s | 7.4 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: CentOS Stream 9 - CRB                            45 kB/s | 7.2 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: CentOS Stream 9 - Extras packages                67 kB/s | 8.3 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: dlrn-antelope-testing                           110 kB/s | 3.0 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: dlrn-antelope-build-deps                        125 kB/s | 3.0 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: centos9-rabbitmq                                110 kB/s | 3.0 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: centos9-storage                                 115 kB/s | 3.0 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: centos9-opstools                                115 kB/s | 3.0 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: NFV SIG OpenvSwitch                             109 kB/s | 3.0 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: repo-setup-centos-appstream                     162 kB/s | 4.4 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: repo-setup-centos-baseos                        191 kB/s | 3.9 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: repo-setup-centos-highavailability              176 kB/s | 3.9 kB     00:00
Nov 22 00:16:57 np0005531754 dnf[34831]: repo-setup-centos-powertools                    190 kB/s | 4.3 kB     00:00
Nov 22 00:16:58 np0005531754 dnf[34831]: Extra Packages for Enterprise Linux 9 - x86_64  288 kB/s |  33 kB     00:00
Nov 22 00:17:00 np0005531754 dnf[34831]: Extra Packages for Enterprise Linux 9 - x86_64  9.3 MB/s |  20 MB     00:02
Nov 22 00:17:08 np0005531754 dnf[34831]: Metadata cache created.
Nov 22 00:17:09 np0005531754 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 22 00:17:09 np0005531754 systemd[1]: Finished dnf makecache.
Nov 22 00:17:09 np0005531754 systemd[1]: dnf-makecache.service: Consumed 10.231s CPU time.
Nov 22 00:18:01 np0005531754 kernel: SELinux:  Converting 2719 SID table entries...
Nov 22 00:18:01 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 00:18:01 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 22 00:18:01 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 00:18:01 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 22 00:18:01 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 00:18:01 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 00:18:01 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 00:18:01 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 22 00:18:02 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:18:02 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:18:02 np0005531754 systemd[1]: Reloading.
Nov 22 00:18:02 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:18:02 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 00:18:03 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:18:03 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:18:03 np0005531754 systemd[1]: man-db-cache-update.service: Consumed 1.349s CPU time.
Nov 22 00:18:03 np0005531754 systemd[1]: run-r222ca1f2ca3048cda84e1c39139a18fc.service: Deactivated successfully.
Nov 22 00:18:03 np0005531754 python3.9[36143]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:18:06 np0005531754 python3.9[36425]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 00:18:07 np0005531754 python3.9[36577]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 00:18:09 np0005531754 python3.9[36731]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:18:10 np0005531754 python3.9[36883]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 00:18:12 np0005531754 python3.9[37035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:18:12 np0005531754 python3.9[37187]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:18:16 np0005531754 python3.9[37310]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788692.2808309-236-78822947103085/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:18:18 np0005531754 python3.9[37462]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:18:18 np0005531754 python3.9[37614]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:18:19 np0005531754 python3.9[37767]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:18:20 np0005531754 python3.9[37919]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 00:18:20 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:18:21 np0005531754 python3.9[38073]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 00:18:22 np0005531754 python3.9[38231]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 00:18:23 np0005531754 python3.9[38391]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 00:18:24 np0005531754 python3.9[38544]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 00:18:25 np0005531754 python3.9[38702]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 00:18:26 np0005531754 python3.9[38854]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:18:28 np0005531754 python3.9[39007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:18:29 np0005531754 python3.9[39159]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:18:29 np0005531754 python3.9[39282]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788708.616697-355-27907899729112/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:18:31 np0005531754 python3.9[39434]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:18:31 np0005531754 systemd[1]: Starting Load Kernel Modules...
Nov 22 00:18:31 np0005531754 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 22 00:18:31 np0005531754 kernel: Bridge firewalling registered
Nov 22 00:18:31 np0005531754 systemd-modules-load[39438]: Inserted module 'br_netfilter'
Nov 22 00:18:31 np0005531754 systemd[1]: Finished Load Kernel Modules.
Nov 22 00:18:31 np0005531754 python3.9[39593]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:18:32 np0005531754 python3.9[39716]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788711.4162822-378-217784410404142/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:18:33 np0005531754 python3.9[39868]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:18:36 np0005531754 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 00:18:36 np0005531754 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 00:18:37 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:18:37 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:18:37 np0005531754 systemd[1]: Reloading.
Nov 22 00:18:37 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:18:37 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 00:18:38 np0005531754 python3.9[41237]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:18:39 np0005531754 python3.9[42228]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 00:18:40 np0005531754 python3.9[42989]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:18:40 np0005531754 python3.9[43843]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:18:41 np0005531754 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 00:18:41 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:18:41 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:18:41 np0005531754 systemd[1]: man-db-cache-update.service: Consumed 4.836s CPU time.
Nov 22 00:18:41 np0005531754 systemd[1]: run-r41a3292441ee4e3d89ab19df50219633.service: Deactivated successfully.
Nov 22 00:18:41 np0005531754 systemd[1]: Starting Authorization Manager...
Nov 22 00:18:41 np0005531754 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 00:18:41 np0005531754 polkitd[44246]: Started polkitd version 0.117
Nov 22 00:18:41 np0005531754 systemd[1]: Started Authorization Manager.
Nov 22 00:18:42 np0005531754 python3.9[44416]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:18:42 np0005531754 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 00:18:42 np0005531754 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 00:18:42 np0005531754 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 00:18:42 np0005531754 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 00:18:42 np0005531754 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 00:18:43 np0005531754 python3.9[44577]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 00:18:46 np0005531754 python3.9[44729]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:18:46 np0005531754 systemd[1]: Reloading.
Nov 22 00:18:46 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:18:47 np0005531754 python3.9[44917]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:18:47 np0005531754 systemd[1]: Reloading.
Nov 22 00:18:47 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:18:48 np0005531754 python3.9[45106]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:18:49 np0005531754 python3.9[45259]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:18:49 np0005531754 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 22 00:18:50 np0005531754 python3.9[45412]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:18:52 np0005531754 python3.9[45574]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:18:53 np0005531754 python3.9[45727]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:18:53 np0005531754 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 00:18:53 np0005531754 systemd[1]: Stopped Apply Kernel Variables.
Nov 22 00:18:53 np0005531754 systemd[1]: Stopping Apply Kernel Variables...
Nov 22 00:18:53 np0005531754 systemd[1]: Starting Apply Kernel Variables...
Nov 22 00:18:53 np0005531754 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 00:18:53 np0005531754 systemd[1]: Finished Apply Kernel Variables.
Nov 22 00:18:53 np0005531754 systemd[1]: session-8.scope: Deactivated successfully.
Nov 22 00:18:53 np0005531754 systemd[1]: session-8.scope: Consumed 2min 16.497s CPU time.
Nov 22 00:18:53 np0005531754 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Nov 22 00:18:53 np0005531754 systemd-logind[798]: Removed session 8.
Nov 22 00:18:58 np0005531754 systemd-logind[798]: New session 9 of user zuul.
Nov 22 00:18:58 np0005531754 systemd[1]: Started Session 9 of User zuul.
Nov 22 00:18:59 np0005531754 python3.9[45910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:19:00 np0005531754 python3.9[46066]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 00:19:01 np0005531754 python3.9[46219]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 00:19:02 np0005531754 python3.9[46377]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 00:19:03 np0005531754 python3.9[46537]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:19:04 np0005531754 python3.9[46621]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 00:19:07 np0005531754 python3.9[46784]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:19:20 np0005531754 kernel: SELinux:  Converting 2731 SID table entries...
Nov 22 00:19:20 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 00:19:20 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 22 00:19:20 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 00:19:20 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 22 00:19:20 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 00:19:20 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 00:19:20 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 00:19:21 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 22 00:19:21 np0005531754 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 22 00:19:24 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:19:24 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:19:24 np0005531754 systemd[1]: Reloading.
Nov 22 00:19:24 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:19:24 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:19:24 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 00:19:27 np0005531754 python3.9[47881]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:19:27 np0005531754 systemd[1]: Reloading.
Nov 22 00:19:27 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:19:27 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:19:27 np0005531754 systemd[1]: Starting Open vSwitch Database Unit...
Nov 22 00:19:27 np0005531754 chown[47924]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 22 00:19:27 np0005531754 ovs-ctl[47929]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 22 00:19:28 np0005531754 ovs-ctl[47929]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 22 00:19:28 np0005531754 ovs-ctl[47929]: Starting ovsdb-server [  OK  ]
Nov 22 00:19:28 np0005531754 ovs-vsctl[47978]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 22 00:19:28 np0005531754 ovs-vsctl[47994]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"772af8e6-0f26-443e-a044-9109439e729d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 22 00:19:28 np0005531754 ovs-ctl[47929]: Configuring Open vSwitch system IDs [  OK  ]
Nov 22 00:19:28 np0005531754 ovs-vsctl[48001]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 00:19:28 np0005531754 ovs-ctl[47929]: Enabling remote OVSDB managers [  OK  ]
Nov 22 00:19:28 np0005531754 systemd[1]: Started Open vSwitch Database Unit.
Nov 22 00:19:28 np0005531754 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 22 00:19:28 np0005531754 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 22 00:19:28 np0005531754 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 22 00:19:28 np0005531754 kernel: openvswitch: Open vSwitch switching datapath
Nov 22 00:19:28 np0005531754 ovs-ctl[48048]: Inserting openvswitch module [  OK  ]
Nov 22 00:19:28 np0005531754 ovs-ctl[48017]: Starting ovs-vswitchd [  OK  ]
Nov 22 00:19:28 np0005531754 ovs-ctl[48017]: Enabling remote OVSDB managers [  OK  ]
Nov 22 00:19:28 np0005531754 ovs-vsctl[48066]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 00:19:28 np0005531754 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 22 00:19:28 np0005531754 systemd[1]: Starting Open vSwitch...
Nov 22 00:19:28 np0005531754 systemd[1]: Finished Open vSwitch.
Nov 22 00:19:29 np0005531754 python3.9[48217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:19:29 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:19:29 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:19:29 np0005531754 systemd[1]: run-r2ded0afb5af74997a975c6ba23172d28.service: Deactivated successfully.
Nov 22 00:19:30 np0005531754 python3.9[48370]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 00:19:32 np0005531754 kernel: SELinux:  Converting 2745 SID table entries...
Nov 22 00:19:32 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 00:19:32 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 22 00:19:32 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 00:19:32 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 22 00:19:32 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 00:19:32 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 00:19:32 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 00:19:34 np0005531754 python3.9[48525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:19:34 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 22 00:19:35 np0005531754 python3.9[48683]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:19:37 np0005531754 python3.9[48836]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:19:39 np0005531754 python3.9[49123]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 00:19:40 np0005531754 python3.9[49273]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:19:40 np0005531754 python3.9[49427]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:19:43 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:19:43 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:19:43 np0005531754 systemd[1]: Reloading.
Nov 22 00:19:43 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:19:43 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:19:43 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 00:19:45 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:19:45 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:19:45 np0005531754 systemd[1]: run-re73195c3ad3849ffa81f41914916eecd.service: Deactivated successfully.
Nov 22 00:19:45 np0005531754 python3.9[49742]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:19:45 np0005531754 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 00:19:45 np0005531754 systemd[1]: Stopped Network Manager Wait Online.
Nov 22 00:19:45 np0005531754 systemd[1]: Stopping Network Manager Wait Online...
Nov 22 00:19:45 np0005531754 systemd[1]: Stopping Network Manager...
Nov 22 00:19:45 np0005531754 NetworkManager[7192]: <info>  [1763788785.6948] caught SIGTERM, shutting down normally.
Nov 22 00:19:45 np0005531754 NetworkManager[7192]: <info>  [1763788785.6961] dhcp4 (eth0): canceled DHCP transaction
Nov 22 00:19:45 np0005531754 NetworkManager[7192]: <info>  [1763788785.6961] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 00:19:45 np0005531754 NetworkManager[7192]: <info>  [1763788785.6961] dhcp4 (eth0): state changed no lease
Nov 22 00:19:45 np0005531754 NetworkManager[7192]: <info>  [1763788785.6963] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 00:19:45 np0005531754 NetworkManager[7192]: <info>  [1763788785.7035] exiting (success)
Nov 22 00:19:45 np0005531754 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 00:19:45 np0005531754 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 00:19:45 np0005531754 systemd[1]: Stopped Network Manager.
Nov 22 00:19:45 np0005531754 systemd[1]: NetworkManager.service: Consumed 19.078s CPU time, 4.1M memory peak, read 0B from disk, written 45.5K to disk.
Nov 22 00:19:45 np0005531754 systemd[1]: Starting Network Manager...
Nov 22 00:19:45 np0005531754 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.7561] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0ad7a365-484a-42b3-93c5-a59cf6bc29d9)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.7565] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.7622] manager[0x55a58f847090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 00:19:45 np0005531754 systemd[1]: Starting Hostname Service...
Nov 22 00:19:45 np0005531754 systemd[1]: Started Hostname Service.
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8529] hostname: hostname: using hostnamed
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8532] hostname: static hostname changed from (none) to "compute-0"
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8539] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8544] manager[0x55a58f847090]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8545] manager[0x55a58f847090]: rfkill: WWAN hardware radio set enabled
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8566] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8574] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8575] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8576] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8577] manager: Networking is enabled by state file
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8580] settings: Loaded settings plugin: keyfile (internal)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8584] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8613] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8626] dhcp: init: Using DHCP client 'internal'
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8629] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8634] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8641] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8648] device (lo): Activation: starting connection 'lo' (29f19999-cee5-4ca2-a804-2bcb67c28530)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8654] device (eth0): carrier: link connected
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8658] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8663] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8664] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8671] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8678] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8684] device (eth1): carrier: link connected
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8688] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8694] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b) (indicated)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8695] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8701] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8707] device (eth1): Activation: starting connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 22 00:19:45 np0005531754 systemd[1]: Started Network Manager.
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8713] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8722] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8725] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8727] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8730] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8732] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8735] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8738] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8741] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8756] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8759] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8766] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8777] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8785] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8787] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8794] device (lo): Activation: successful, device activated.
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8800] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8806] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 00:19:45 np0005531754 systemd[1]: Starting Network Manager Wait Online...
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8869] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8874] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8881] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8886] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8890] device (eth1): Activation: successful, device activated.
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8913] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8916] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8921] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8924] device (eth0): Activation: successful, device activated.
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8928] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 00:19:45 np0005531754 NetworkManager[49751]: <info>  [1763788785.8931] manager: startup complete
Nov 22 00:19:45 np0005531754 systemd[1]: Finished Network Manager Wait Online.
Nov 22 00:19:46 np0005531754 python3.9[49969]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:19:52 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:19:52 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:19:52 np0005531754 systemd[1]: Reloading.
Nov 22 00:19:53 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:19:53 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:19:53 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 00:19:54 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:19:54 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:19:54 np0005531754 systemd[1]: run-rd5fac863e67c40f9b184d3766b70390a.service: Deactivated successfully.
Nov 22 00:19:55 np0005531754 python3.9[50430]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:19:55 np0005531754 python3.9[50582]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:19:56 np0005531754 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 00:19:56 np0005531754 python3.9[50736]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:19:57 np0005531754 python3.9[50888]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:19:58 np0005531754 python3.9[51040]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:19:58 np0005531754 python3.9[51192]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:19:59 np0005531754 python3.9[51344]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:20:00 np0005531754 python3.9[51467]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788798.8677533-229-186801681601953/.source _original_basename=.5hom73rt follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:20:00 np0005531754 python3.9[51619]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:20:01 np0005531754 python3.9[51771]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 22 00:20:02 np0005531754 python3.9[51923]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:20:04 np0005531754 python3.9[52352]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 22 00:20:05 np0005531754 ansible-async_wrapper.py[52527]: Invoked with j507233999034 300 /home/zuul/.ansible/tmp/ansible-tmp-1763788804.8969402-295-247995704436509/AnsiballZ_edpm_os_net_config.py _
Nov 22 00:20:05 np0005531754 ansible-async_wrapper.py[52530]: Starting module and watcher
Nov 22 00:20:05 np0005531754 ansible-async_wrapper.py[52530]: Start watching 52531 (300)
Nov 22 00:20:05 np0005531754 ansible-async_wrapper.py[52531]: Start module (52531)
Nov 22 00:20:05 np0005531754 ansible-async_wrapper.py[52527]: Return async_wrapper task started.
Nov 22 00:20:05 np0005531754 python3.9[52532]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 22 00:20:06 np0005531754 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 22 00:20:06 np0005531754 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 22 00:20:06 np0005531754 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 22 00:20:06 np0005531754 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 22 00:20:06 np0005531754 kernel: cfg80211: failed to load regulatory.db
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9067] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9081] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9613] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9614] audit: op="connection-add" uuid="e3ef8fdc-feaa-4740-bf30-fbac9c54c3a8" name="br-ex-br" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9629] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9630] audit: op="connection-add" uuid="6dbe5176-d69d-43d2-ae99-9cb9a4536e43" name="br-ex-port" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9642] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9643] audit: op="connection-add" uuid="523734da-6a1f-4d94-a351-29cd9eb90d3d" name="eth1-port" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9654] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9655] audit: op="connection-add" uuid="d861a3cf-32db-4ec4-8a45-c6981b01c962" name="vlan20-port" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9666] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9668] audit: op="connection-add" uuid="fcdea826-d26b-4a70-b0cc-3b8c842ef2cb" name="vlan21-port" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9678] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9679] audit: op="connection-add" uuid="5a268958-41b7-4275-9191-e164fec00046" name="vlan22-port" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9692] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9693] audit: op="connection-add" uuid="4a0634c3-6b1d-4758-a608-f5568b835867" name="vlan23-port" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9712] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9727] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9728] audit: op="connection-add" uuid="ca2e9ad1-e067-48e3-86e8-8c179e3a623c" name="br-ex-if" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9753] audit: op="connection-update" uuid="8d97a97e-ce0a-5c97-95d5-8291b500636b" name="ci-private-network" args="ipv6.dns,ipv6.addresses,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type,connection.controller,connection.slave-type,connection.master,connection.port-type,connection.timestamp,ipv4.dns,ipv4.addresses,ipv4.routing-rules,ipv4.method,ipv4.routes,ipv4.never-default" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9768] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9770] audit: op="connection-add" uuid="29f723c1-2b0e-4942-a473-20482ebe0a3e" name="vlan20-if" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9784] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9785] audit: op="connection-add" uuid="5a95e5dc-7a2e-4194-8554-f486f4eab24d" name="vlan21-if" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9801] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9803] audit: op="connection-add" uuid="11ff7f4c-cd22-4253-a231-ee878e554849" name="vlan22-if" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9816] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9818] audit: op="connection-add" uuid="6a4b13ae-dd6e-4a6a-88f1-cf7926d0a6ce" name="vlan23-if" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9830] audit: op="connection-delete" uuid="b63a3bd3-2d39-3e26-9215-4f6c298d6a18" name="Wired connection 1" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9859] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9873] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9877] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e3ef8fdc-feaa-4740-bf30-fbac9c54c3a8)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9878] audit: op="connection-activate" uuid="e3ef8fdc-feaa-4740-bf30-fbac9c54c3a8" name="br-ex-br" pid=52533 uid=0 result="success"
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9879] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9885] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9889] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6dbe5176-d69d-43d2-ae99-9cb9a4536e43)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9890] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9895] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9900] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (523734da-6a1f-4d94-a351-29cd9eb90d3d)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9901] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9907] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9910] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d861a3cf-32db-4ec4-8a45-c6981b01c962)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9911] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9916] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9920] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (fcdea826-d26b-4a70-b0cc-3b8c842ef2cb)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9922] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9928] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9933] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (5a268958-41b7-4275-9191-e164fec00046)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9935] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9941] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9946] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4a0634c3-6b1d-4758-a608-f5568b835867)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9946] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9949] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9951] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9957] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9961] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9965] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ca2e9ad1-e067-48e3-86e8-8c179e3a623c)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9966] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9969] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9970] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9972] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9973] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9983] device (eth1): disconnecting for new activation request.
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9984] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9986] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9988] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9988] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9991] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9994] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9997] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (29f723c1-2b0e-4942-a473-20482ebe0a3e)
Nov 22 00:20:07 np0005531754 NetworkManager[49751]: <info>  [1763788807.9998] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0000] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0001] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0002] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0005] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0007] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0013] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (5a95e5dc-7a2e-4194-8554-f486f4eab24d)
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0014] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0016] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0017] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0018] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0021] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0024] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0027] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (11ff7f4c-cd22-4253-a231-ee878e554849)
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0028] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0030] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0032] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0032] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0034] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0037] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0040] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (6a4b13ae-dd6e-4a6a-88f1-cf7926d0a6ce)
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0041] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0043] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0044] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0045] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0046] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0055] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=52533 uid=0 result="success"
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0056] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0058] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0059] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0064] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0067] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0069] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0071] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0072] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0076] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 kernel: ovs-system: entered promiscuous mode
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0088] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0092] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0094] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0099] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0103] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 systemd-udevd[52537]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 00:20:08 np0005531754 kernel: Timeout policy base is empty
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0106] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0107] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0111] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0115] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0119] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0121] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0126] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0130] dhcp4 (eth0): canceled DHCP transaction
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0130] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0130] dhcp4 (eth0): state changed no lease
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0132] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0142] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0146] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52533 uid=0 result="fail" reason="Device is not activated"
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0151] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0203] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0216] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0222] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0268] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0279] device (eth1): disconnecting for new activation request.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0279] audit: op="connection-activate" uuid="8d97a97e-ce0a-5c97-95d5-8291b500636b" name="ci-private-network" pid=52533 uid=0 result="success"
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0286] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 22 00:20:08 np0005531754 kernel: br-ex: entered promiscuous mode
Nov 22 00:20:08 np0005531754 kernel: vlan22: entered promiscuous mode
Nov 22 00:20:08 np0005531754 systemd-udevd[52539]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0531] device (eth1): Activation: starting connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0535] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 kernel: vlan21: entered promiscuous mode
Nov 22 00:20:08 np0005531754 systemd-udevd[52638]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0558] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0561] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0566] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0575] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0590] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0596] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0597] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0598] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0599] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0599] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0600] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0601] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 kernel: vlan23: entered promiscuous mode
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0605] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0610] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0612] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0615] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0618] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0621] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0624] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0626] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0628] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0630] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0633] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0636] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0638] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0642] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 22 00:20:08 np0005531754 kernel: vlan20: entered promiscuous mode
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0655] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0658] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0666] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0677] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0686] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0695] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0739] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0740] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0741] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0746] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0750] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0769] device (eth1): Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0773] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0778] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0783] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0788] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0793] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0801] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0806] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0813] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0833] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0846] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0856] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0857] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0863] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0870] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0873] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 00:20:08 np0005531754 NetworkManager[49751]: <info>  [1763788808.0879] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.2270] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.4137] checkpoint[0x55a58f81d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.4140] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 00:20:09 np0005531754 python3.9[52893]: ansible-ansible.legacy.async_status Invoked with jid=j507233999034.52527 mode=status _async_dir=/root/.ansible_async
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.7333] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.7347] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.9787] audit: op="networking-control" arg="global-dns-configuration" pid=52533 uid=0 result="success"
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.9822] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.9856] audit: op="networking-control" arg="global-dns-configuration" pid=52533 uid=0 result="success"
Nov 22 00:20:09 np0005531754 NetworkManager[49751]: <info>  [1763788809.9883] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 00:20:10 np0005531754 NetworkManager[49751]: <info>  [1763788810.1219] checkpoint[0x55a58f81da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 22 00:20:10 np0005531754 NetworkManager[49751]: <info>  [1763788810.1223] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 00:20:10 np0005531754 ansible-async_wrapper.py[52531]: Module complete (52531)
Nov 22 00:20:10 np0005531754 ansible-async_wrapper.py[52530]: Done in kid B.
Nov 22 00:20:13 np0005531754 python3.9[52999]: ansible-ansible.legacy.async_status Invoked with jid=j507233999034.52527 mode=status _async_dir=/root/.ansible_async
Nov 22 00:20:13 np0005531754 python3.9[53099]: ansible-ansible.legacy.async_status Invoked with jid=j507233999034.52527 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 00:20:14 np0005531754 python3.9[53251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:20:14 np0005531754 python3.9[53374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788813.8271189-322-191219942198960/.source.returncode _original_basename=.0qwnkb24 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:20:15 np0005531754 python3.9[53526]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:20:15 np0005531754 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 00:20:16 np0005531754 python3.9[53651]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788815.1091008-338-8598487192105/.source.cfg _original_basename=.jitq13se follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:20:16 np0005531754 python3.9[53804]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:20:16 np0005531754 systemd[1]: Reloading Network Manager...
Nov 22 00:20:17 np0005531754 NetworkManager[49751]: <info>  [1763788817.0029] audit: op="reload" arg="0" pid=53808 uid=0 result="success"
Nov 22 00:20:17 np0005531754 NetworkManager[49751]: <info>  [1763788817.0036] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 22 00:20:17 np0005531754 systemd[1]: Reloaded Network Manager.
Nov 22 00:20:17 np0005531754 systemd[1]: session-9.scope: Deactivated successfully.
Nov 22 00:20:17 np0005531754 systemd[1]: session-9.scope: Consumed 51.098s CPU time.
Nov 22 00:20:17 np0005531754 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Nov 22 00:20:17 np0005531754 systemd-logind[798]: Removed session 9.
Nov 22 00:20:23 np0005531754 systemd-logind[798]: New session 10 of user zuul.
Nov 22 00:20:23 np0005531754 systemd[1]: Started Session 10 of User zuul.
Nov 22 00:20:24 np0005531754 python3.9[53992]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:20:25 np0005531754 python3.9[54146]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:20:27 np0005531754 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 00:20:27 np0005531754 python3.9[54341]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:20:27 np0005531754 systemd[1]: session-10.scope: Deactivated successfully.
Nov 22 00:20:27 np0005531754 systemd[1]: session-10.scope: Consumed 2.624s CPU time.
Nov 22 00:20:27 np0005531754 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Nov 22 00:20:27 np0005531754 systemd-logind[798]: Removed session 10.
Nov 22 00:20:33 np0005531754 systemd-logind[798]: New session 11 of user zuul.
Nov 22 00:20:33 np0005531754 systemd[1]: Started Session 11 of User zuul.
Nov 22 00:20:34 np0005531754 python3.9[54522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:20:35 np0005531754 python3.9[54676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:20:36 np0005531754 python3.9[54833]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:20:37 np0005531754 python3.9[54917]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:20:39 np0005531754 python3.9[55071]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:20:40 np0005531754 python3.9[55266]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:20:41 np0005531754 python3.9[55418]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:20:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-compat3380072165-merged.mount: Deactivated successfully.
Nov 22 00:20:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1235679625-merged.mount: Deactivated successfully.
Nov 22 00:20:41 np0005531754 podman[55419]: 2025-11-22 05:20:41.934892304 +0000 UTC m=+0.046162166 system refresh
Nov 22 00:20:42 np0005531754 python3.9[55581]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:20:42 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:20:43 np0005531754 python3.9[55704]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788842.1311836-79-179516530359643/.source.json follow=False _original_basename=podman_network_config.j2 checksum=285d677619700b868d4522aa7e8707442ef518c9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:20:44 np0005531754 python3.9[55856]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:20:44 np0005531754 python3.9[55979]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788843.8885467-94-212673733395452/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5248920f79a1cb67b3ef013f523e4500b06a731f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:20:45 np0005531754 python3.9[56131]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:20:46 np0005531754 python3.9[56283]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:20:47 np0005531754 python3.9[56435]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:20:48 np0005531754 python3.9[56587]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:20:48 np0005531754 python3.9[56739]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:20:51 np0005531754 python3.9[56892]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:20:52 np0005531754 python3.9[57046]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:20:53 np0005531754 python3.9[57198]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:20:53 np0005531754 python3.9[57350]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:20:54 np0005531754 python3.9[57503]: ansible-service_facts Invoked
Nov 22 00:20:54 np0005531754 network[57520]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 00:20:54 np0005531754 network[57521]: 'network-scripts' will be removed from distribution in near future.
Nov 22 00:20:54 np0005531754 network[57522]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 00:21:00 np0005531754 python3.9[57974]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:21:02 np0005531754 python3.9[58127]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 00:21:03 np0005531754 python3.9[58279]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:04 np0005531754 python3.9[58404]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788863.289942-238-197488353136691/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:05 np0005531754 python3.9[58558]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:05 np0005531754 python3.9[58683]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788864.6128426-253-224061885439696/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:06 np0005531754 python3.9[58837]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:08 np0005531754 python3.9[58991]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:21:09 np0005531754 python3.9[59075]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:21:10 np0005531754 python3.9[59229]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:21:11 np0005531754 python3.9[59313]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:21:11 np0005531754 chronyd[781]: chronyd exiting
Nov 22 00:21:11 np0005531754 systemd[1]: Stopping NTP client/server...
Nov 22 00:21:11 np0005531754 systemd[1]: chronyd.service: Deactivated successfully.
Nov 22 00:21:11 np0005531754 systemd[1]: Stopped NTP client/server.
Nov 22 00:21:11 np0005531754 systemd[1]: Starting NTP client/server...
Nov 22 00:21:11 np0005531754 chronyd[59321]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 00:21:11 np0005531754 chronyd[59321]: Frequency -25.829 +/- 0.171 ppm read from /var/lib/chrony/drift
Nov 22 00:21:11 np0005531754 chronyd[59321]: Loaded seccomp filter (level 2)
Nov 22 00:21:11 np0005531754 systemd[1]: Started NTP client/server.
Nov 22 00:21:11 np0005531754 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Nov 22 00:21:11 np0005531754 systemd[1]: session-11.scope: Deactivated successfully.
Nov 22 00:21:11 np0005531754 systemd[1]: session-11.scope: Consumed 27.360s CPU time.
Nov 22 00:21:11 np0005531754 systemd-logind[798]: Removed session 11.
Nov 22 00:21:17 np0005531754 systemd-logind[798]: New session 12 of user zuul.
Nov 22 00:21:17 np0005531754 systemd[1]: Started Session 12 of User zuul.
Nov 22 00:21:18 np0005531754 python3.9[59502]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:19 np0005531754 python3.9[59654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:19 np0005531754 python3.9[59777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788878.3566797-34-188032673110942/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:20 np0005531754 systemd[1]: session-12.scope: Deactivated successfully.
Nov 22 00:21:20 np0005531754 systemd[1]: session-12.scope: Consumed 1.767s CPU time.
Nov 22 00:21:20 np0005531754 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Nov 22 00:21:20 np0005531754 systemd-logind[798]: Removed session 12.
Nov 22 00:21:25 np0005531754 systemd-logind[798]: New session 13 of user zuul.
Nov 22 00:21:25 np0005531754 systemd[1]: Started Session 13 of User zuul.
Nov 22 00:21:26 np0005531754 python3.9[59955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:21:27 np0005531754 python3.9[60111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:28 np0005531754 python3.9[60286]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:29 np0005531754 python3.9[60409]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763788888.1421494-41-236861592226798/.source.json _original_basename=.m_sh_aqj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:30 np0005531754 python3.9[60561]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:31 np0005531754 python3.9[60684]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788889.9389427-64-70300357265889/.source _original_basename=.cpmx41u5 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:31 np0005531754 python3.9[60836]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:21:32 np0005531754 python3.9[60988]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:32 np0005531754 python3.9[61111]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788891.8855565-88-249004461455160/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:21:33 np0005531754 python3.9[61263]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:34 np0005531754 python3.9[61386]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788893.0867453-88-114917941509815/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:21:34 np0005531754 python3.9[61538]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:35 np0005531754 python3.9[61690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:35 np0005531754 python3.9[61813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788894.8753633-125-253932159872018/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:36 np0005531754 python3.9[61965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:37 np0005531754 python3.9[62088]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788896.0682867-140-65509616201285/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:38 np0005531754 python3.9[62240]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:21:38 np0005531754 systemd[1]: Reloading.
Nov 22 00:21:38 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:21:38 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:21:38 np0005531754 systemd[1]: Reloading.
Nov 22 00:21:38 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:21:38 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:21:38 np0005531754 systemd[1]: Starting EDPM Container Shutdown...
Nov 22 00:21:38 np0005531754 systemd[1]: Finished EDPM Container Shutdown.
Nov 22 00:21:39 np0005531754 python3.9[62465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:39 np0005531754 python3.9[62588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788898.872449-163-74482882916275/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:40 np0005531754 python3.9[62740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:41 np0005531754 python3.9[62863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788900.1310968-178-194616479260024/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:42 np0005531754 python3.9[63015]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:21:42 np0005531754 systemd[1]: Reloading.
Nov 22 00:21:42 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:21:42 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:21:42 np0005531754 systemd[1]: Reloading.
Nov 22 00:21:42 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:21:42 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:21:42 np0005531754 systemd[1]: Starting Create netns directory...
Nov 22 00:21:42 np0005531754 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 00:21:42 np0005531754 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 00:21:42 np0005531754 systemd[1]: Finished Create netns directory.
Nov 22 00:21:43 np0005531754 python3.9[63242]: ansible-ansible.builtin.service_facts Invoked
Nov 22 00:21:43 np0005531754 network[63259]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 00:21:43 np0005531754 network[63260]: 'network-scripts' will be removed from distribution in near future.
Nov 22 00:21:43 np0005531754 network[63261]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 00:21:46 np0005531754 python3.9[63523]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:21:46 np0005531754 systemd[1]: Reloading.
Nov 22 00:21:47 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:21:47 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:21:47 np0005531754 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 22 00:21:47 np0005531754 iptables.init[63563]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 22 00:21:47 np0005531754 iptables.init[63563]: iptables: Flushing firewall rules: [  OK  ]
Nov 22 00:21:47 np0005531754 systemd[1]: iptables.service: Deactivated successfully.
Nov 22 00:21:47 np0005531754 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 22 00:21:48 np0005531754 python3.9[63760]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:21:49 np0005531754 python3.9[63914]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:21:49 np0005531754 systemd[1]: Reloading.
Nov 22 00:21:49 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:21:49 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:21:49 np0005531754 systemd[1]: Starting Netfilter Tables...
Nov 22 00:21:49 np0005531754 systemd[1]: Finished Netfilter Tables.
Nov 22 00:21:51 np0005531754 python3.9[64106]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:21:51 np0005531754 python3.9[64259]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:52 np0005531754 python3.9[64384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788911.4621804-247-16553724278889/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:53 np0005531754 python3.9[64537]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:21:53 np0005531754 systemd[1]: Reloading OpenSSH server daemon...
Nov 22 00:21:53 np0005531754 systemd[1]: Reloaded OpenSSH server daemon.
Nov 22 00:21:54 np0005531754 python3.9[64693]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:54 np0005531754 python3.9[64845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:55 np0005531754 python3.9[64968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788914.3083594-278-91404843847875/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:56 np0005531754 python3.9[65120]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 00:21:56 np0005531754 systemd[1]: Starting Time & Date Service...
Nov 22 00:21:56 np0005531754 systemd[1]: Started Time & Date Service.
Nov 22 00:21:57 np0005531754 python3.9[65276]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:58 np0005531754 python3.9[65428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:21:58 np0005531754 python3.9[65551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788917.7073703-313-103320882982555/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:21:59 np0005531754 python3.9[65703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:22:00 np0005531754 python3.9[65826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788919.1458921-328-116604979989003/.source.yaml _original_basename=.f55fmubb follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:00 np0005531754 python3.9[65978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:22:01 np0005531754 python3.9[66101]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788920.3817027-343-5806126297593/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:02 np0005531754 python3.9[66253]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:22:02 np0005531754 python3.9[66406]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:22:03 np0005531754 python3[66559]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 00:22:04 np0005531754 python3.9[66711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:22:04 np0005531754 python3.9[66834]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788923.8731472-382-84108130920977/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:05 np0005531754 python3.9[66986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:22:06 np0005531754 python3.9[67109]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788925.1576605-397-211062127273011/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:06 np0005531754 python3.9[67261]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:22:07 np0005531754 python3.9[67384]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788926.4552882-412-3183530403795/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:08 np0005531754 python3.9[67536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:22:08 np0005531754 python3.9[67659]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788927.768072-427-214302495441314/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:09 np0005531754 python3.9[67811]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:22:10 np0005531754 python3.9[67934]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788928.9330702-442-176112762379502/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:10 np0005531754 python3.9[68086]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:11 np0005531754 python3.9[68238]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:22:12 np0005531754 python3.9[68397]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:13 np0005531754 python3.9[68550]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:13 np0005531754 python3.9[68702]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:14 np0005531754 python3.9[68854]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 00:22:15 np0005531754 python3.9[69007]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 00:22:15 np0005531754 systemd[1]: session-13.scope: Deactivated successfully.
Nov 22 00:22:15 np0005531754 systemd[1]: session-13.scope: Consumed 37.962s CPU time.
Nov 22 00:22:15 np0005531754 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Nov 22 00:22:15 np0005531754 systemd-logind[798]: Removed session 13.
Nov 22 00:22:21 np0005531754 systemd-logind[798]: New session 14 of user zuul.
Nov 22 00:22:21 np0005531754 systemd[1]: Started Session 14 of User zuul.
Nov 22 00:22:22 np0005531754 python3.9[69188]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 00:22:23 np0005531754 python3.9[69340]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:22:24 np0005531754 python3.9[69492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:22:25 np0005531754 python3.9[69644]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCit8LB4kN4s+ZkWj80X2HgMN9rqM53DLp82j+iZT/+7rzt4hXyml/QRwnRtRuhiMmFC20M8IvUEbNi1zKVVkcoHO/p5QkECCjKHEn1MqPis5D+QZQrGTeDLDkMrhuE8Pw5y61lJ5qm3EI6GZDRrUGmuVCEeJh9jpUQQ+8LlojrWycpo0svG9DIb8mUq1I1nCK8CeVIHkhCTc+F7OhSzzKJQHl5RrVX/K9kH0ak//kwjPdbyIHnB8JaTqci/DJPmcm4GxKKRNVErCrY3DBZNFCBt8iwjWu4MrqLv3iFLufwFed9mnoqLvVJGR8kDpmCdEKpNs8k6fls3xtt9j7NHMXOf4Xio2n+e3iS0eOEjoIKs/UMbDlHH7hqO/lx7Yv3YLgQtef4crGkOWxGILX2eOs5/1d6lgIzp04lzLy2oPlyJGb8bCwGvRMwojZNUO91mQkoO5vDssg6huJ8lBEWfxr8rao78xnahRc+m7sCEtI5n1VTqXAor62Z67+PFALoyi0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG3PVl+DJPXhnIIicPnX2nTw410SH80rkcpaBLgvWfvA#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITMmQ16+iCw0/ZG0kuxaDVundusiLycQm50s7cZraLscE8RlmDWnFcRh+jIhL0lLGEyvuocxAlG/xRmMEF3zf8=#012 create=True mode=0644 path=/tmp/ansible.21tunqgh state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:26 np0005531754 python3.9[69796]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.21tunqgh' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:22:26 np0005531754 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 00:22:26 np0005531754 python3.9[69952]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.21tunqgh state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:27 np0005531754 systemd[1]: session-14.scope: Deactivated successfully.
Nov 22 00:22:27 np0005531754 systemd[1]: session-14.scope: Consumed 3.626s CPU time.
Nov 22 00:22:27 np0005531754 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Nov 22 00:22:27 np0005531754 systemd-logind[798]: Removed session 14.
Nov 22 00:22:32 np0005531754 systemd-logind[798]: New session 15 of user zuul.
Nov 22 00:22:32 np0005531754 systemd[1]: Started Session 15 of User zuul.
Nov 22 00:22:33 np0005531754 python3.9[70132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:22:34 np0005531754 python3.9[70288]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 00:22:35 np0005531754 python3.9[70442]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:22:36 np0005531754 python3.9[70595]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:22:37 np0005531754 python3.9[70748]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:22:38 np0005531754 python3.9[70902]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:22:39 np0005531754 python3.9[71057]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:22:39 np0005531754 systemd[1]: session-15.scope: Deactivated successfully.
Nov 22 00:22:39 np0005531754 systemd[1]: session-15.scope: Consumed 5.045s CPU time.
Nov 22 00:22:39 np0005531754 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Nov 22 00:22:39 np0005531754 systemd-logind[798]: Removed session 15.
Nov 22 00:22:45 np0005531754 systemd-logind[798]: New session 16 of user zuul.
Nov 22 00:22:45 np0005531754 systemd[1]: Started Session 16 of User zuul.
Nov 22 00:22:46 np0005531754 python3.9[71235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:22:47 np0005531754 python3.9[71391]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:22:48 np0005531754 python3.9[71475]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 00:22:50 np0005531754 python3.9[71626]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:22:52 np0005531754 python3.9[71777]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 00:22:53 np0005531754 python3.9[71927]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:22:53 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:22:54 np0005531754 python3.9[72078]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:22:54 np0005531754 systemd[1]: session-16.scope: Deactivated successfully.
Nov 22 00:22:54 np0005531754 systemd[1]: session-16.scope: Consumed 6.421s CPU time.
Nov 22 00:22:54 np0005531754 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Nov 22 00:22:54 np0005531754 systemd-logind[798]: Removed session 16.
Nov 22 00:23:01 np0005531754 systemd-logind[798]: New session 17 of user zuul.
Nov 22 00:23:01 np0005531754 systemd[1]: Started Session 17 of User zuul.
Nov 22 00:23:07 np0005531754 python3[72844]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:23:08 np0005531754 python3[72939]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 00:23:10 np0005531754 python3[72966]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:23:10 np0005531754 python3[72992]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:10 np0005531754 kernel: loop: module loaded
Nov 22 00:23:10 np0005531754 kernel: loop3: detected capacity change from 0 to 41943040
Nov 22 00:23:11 np0005531754 python3[73027]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:11 np0005531754 lvm[73030]: PV /dev/loop3 not used.
Nov 22 00:23:11 np0005531754 lvm[73039]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 00:23:11 np0005531754 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 22 00:23:11 np0005531754 lvm[73041]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 22 00:23:11 np0005531754 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 22 00:23:11 np0005531754 python3[73119]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:23:12 np0005531754 python3[73192]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763788991.4528005-36104-101923152292173/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:23:12 np0005531754 python3[73242]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:23:12 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:13 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:13 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:13 np0005531754 systemd[1]: Starting Ceph OSD losetup...
Nov 22 00:23:13 np0005531754 bash[73282]: /dev/loop3: [64513]:4194941 (/var/lib/ceph-osd-0.img)
Nov 22 00:23:13 np0005531754 systemd[1]: Finished Ceph OSD losetup.
Nov 22 00:23:13 np0005531754 lvm[73283]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 00:23:13 np0005531754 lvm[73283]: VG ceph_vg0 finished
Nov 22 00:23:13 np0005531754 python3[73309]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 00:23:15 np0005531754 python3[73336]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:23:15 np0005531754 python3[73362]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:15 np0005531754 kernel: loop4: detected capacity change from 0 to 41943040
Nov 22 00:23:15 np0005531754 python3[73394]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:15 np0005531754 lvm[73397]: PV /dev/loop4 not used.
Nov 22 00:23:15 np0005531754 lvm[73399]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 00:23:16 np0005531754 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 22 00:23:16 np0005531754 lvm[73410]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 00:23:16 np0005531754 lvm[73410]: VG ceph_vg1 finished
Nov 22 00:23:16 np0005531754 lvm[73408]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 22 00:23:16 np0005531754 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 22 00:23:16 np0005531754 python3[73488]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:23:17 np0005531754 python3[73561]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763788996.2851915-36131-152525462381567/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:23:17 np0005531754 python3[73611]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:23:17 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:17 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:17 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:17 np0005531754 systemd[1]: Starting Ceph OSD losetup...
Nov 22 00:23:17 np0005531754 bash[73652]: /dev/loop4: [64513]:4328008 (/var/lib/ceph-osd-1.img)
Nov 22 00:23:17 np0005531754 systemd[1]: Finished Ceph OSD losetup.
Nov 22 00:23:17 np0005531754 lvm[73653]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 00:23:17 np0005531754 lvm[73653]: VG ceph_vg1 finished
Nov 22 00:23:18 np0005531754 python3[73679]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 00:23:19 np0005531754 python3[73706]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:23:20 np0005531754 chronyd[59321]: Selected source 23.133.168.247 (pool.ntp.org)
Nov 22 00:23:20 np0005531754 python3[73732]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:20 np0005531754 kernel: loop5: detected capacity change from 0 to 41943040
Nov 22 00:23:20 np0005531754 python3[73764]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:20 np0005531754 lvm[73767]: PV /dev/loop5 not used.
Nov 22 00:23:20 np0005531754 lvm[73769]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 00:23:20 np0005531754 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 22 00:23:20 np0005531754 lvm[73773]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 22 00:23:20 np0005531754 lvm[73779]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 00:23:20 np0005531754 lvm[73779]: VG ceph_vg2 finished
Nov 22 00:23:20 np0005531754 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 22 00:23:21 np0005531754 python3[73859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:23:21 np0005531754 python3[73932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789001.0991242-36158-95671880934911/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:23:22 np0005531754 python3[73982]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:23:22 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:22 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:22 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:22 np0005531754 systemd[1]: Starting Ceph OSD losetup...
Nov 22 00:23:22 np0005531754 bash[74022]: /dev/loop5: [64513]:4328009 (/var/lib/ceph-osd-2.img)
Nov 22 00:23:22 np0005531754 systemd[1]: Finished Ceph OSD losetup.
Nov 22 00:23:22 np0005531754 lvm[74023]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 00:23:22 np0005531754 lvm[74023]: VG ceph_vg2 finished
Nov 22 00:23:24 np0005531754 python3[74047]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:23:27 np0005531754 python3[74140]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 00:23:28 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:23:28 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:23:29 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:23:29 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:23:29 np0005531754 systemd[1]: run-r7cff77c6cee64418a4e62cfbf5f00f5d.service: Deactivated successfully.
Nov 22 00:23:29 np0005531754 python3[74252]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:23:29 np0005531754 python3[74280]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:30 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:30 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:30 np0005531754 python3[74343]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:23:31 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:31 np0005531754 python3[74369]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:23:31 np0005531754 python3[74447]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:23:32 np0005531754 python3[74520]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789011.6581194-36305-40649663552791/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:23:33 np0005531754 python3[74622]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:23:33 np0005531754 python3[74695]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789012.9001427-36323-152131388737324/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:23:34 np0005531754 python3[74745]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:23:34 np0005531754 python3[74773]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:23:34 np0005531754 python3[74801]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:23:35 np0005531754 python3[74829]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:23:35 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:35 np0005531754 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 00:23:35 np0005531754 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 00:23:35 np0005531754 systemd-logind[798]: New session 18 of user ceph-admin.
Nov 22 00:23:35 np0005531754 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 00:23:35 np0005531754 systemd[1]: Starting User Manager for UID 42477...
Nov 22 00:23:35 np0005531754 systemd[74849]: Queued start job for default target Main User Target.
Nov 22 00:23:35 np0005531754 systemd[74849]: Created slice User Application Slice.
Nov 22 00:23:35 np0005531754 systemd[74849]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 00:23:35 np0005531754 systemd[74849]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 00:23:35 np0005531754 systemd[74849]: Reached target Paths.
Nov 22 00:23:35 np0005531754 systemd[74849]: Reached target Timers.
Nov 22 00:23:35 np0005531754 systemd[74849]: Starting D-Bus User Message Bus Socket...
Nov 22 00:23:35 np0005531754 systemd[74849]: Starting Create User's Volatile Files and Directories...
Nov 22 00:23:35 np0005531754 systemd[74849]: Listening on D-Bus User Message Bus Socket.
Nov 22 00:23:35 np0005531754 systemd[74849]: Reached target Sockets.
Nov 22 00:23:35 np0005531754 systemd[74849]: Finished Create User's Volatile Files and Directories.
Nov 22 00:23:35 np0005531754 systemd[74849]: Reached target Basic System.
Nov 22 00:23:35 np0005531754 systemd[74849]: Reached target Main User Target.
Nov 22 00:23:35 np0005531754 systemd[74849]: Startup finished in 141ms.
Nov 22 00:23:35 np0005531754 systemd[1]: Started User Manager for UID 42477.
Nov 22 00:23:35 np0005531754 systemd[1]: Started Session 18 of User ceph-admin.
Nov 22 00:23:35 np0005531754 systemd[1]: session-18.scope: Deactivated successfully.
Nov 22 00:23:35 np0005531754 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Nov 22 00:23:35 np0005531754 systemd-logind[798]: Removed session 18.
Nov 22 00:23:37 np0005531754 systemd[1]: var-lib-containers-storage-overlay-compat4184782893-merged.mount: Deactivated successfully.
Nov 22 00:23:38 np0005531754 systemd[1]: var-lib-containers-storage-overlay-compat4184782893-lower\x2dmapped.mount: Deactivated successfully.
Nov 22 00:23:46 np0005531754 systemd[1]: Stopping User Manager for UID 42477...
Nov 22 00:23:46 np0005531754 systemd[74849]: Activating special unit Exit the Session...
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped target Main User Target.
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped target Basic System.
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped target Paths.
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped target Sockets.
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped target Timers.
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 00:23:46 np0005531754 systemd[74849]: Closed D-Bus User Message Bus Socket.
Nov 22 00:23:46 np0005531754 systemd[74849]: Stopped Create User's Volatile Files and Directories.
Nov 22 00:23:46 np0005531754 systemd[74849]: Removed slice User Application Slice.
Nov 22 00:23:46 np0005531754 systemd[74849]: Reached target Shutdown.
Nov 22 00:23:46 np0005531754 systemd[74849]: Finished Exit the Session.
Nov 22 00:23:46 np0005531754 systemd[74849]: Reached target Exit the Session.
Nov 22 00:23:46 np0005531754 systemd[1]: user@42477.service: Deactivated successfully.
Nov 22 00:23:46 np0005531754 systemd[1]: Stopped User Manager for UID 42477.
Nov 22 00:23:46 np0005531754 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 22 00:23:46 np0005531754 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 22 00:23:46 np0005531754 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 22 00:23:46 np0005531754 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 22 00:23:46 np0005531754 systemd[1]: Removed slice User Slice of UID 42477.
Nov 22 00:23:49 np0005531754 podman[74903]: 2025-11-22 05:23:49.174211911 +0000 UTC m=+13.253894788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:49 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:49 np0005531754 podman[74966]: 2025-11-22 05:23:49.269672366 +0000 UTC m=+0.061100633 container create 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 00:23:49 np0005531754 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 22 00:23:49 np0005531754 systemd[1]: Started libpod-conmon-3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7.scope.
Nov 22 00:23:49 np0005531754 podman[74966]: 2025-11-22 05:23:49.247865777 +0000 UTC m=+0.039294064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:49 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:49 np0005531754 podman[74966]: 2025-11-22 05:23:49.380348734 +0000 UTC m=+0.171777011 container init 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:23:49 np0005531754 podman[74966]: 2025-11-22 05:23:49.389942467 +0000 UTC m=+0.181370714 container start 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:49 np0005531754 podman[74966]: 2025-11-22 05:23:49.395855455 +0000 UTC m=+0.187283732 container attach 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:49 np0005531754 elastic_jennings[74982]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 00:23:49 np0005531754 systemd[1]: libpod-3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7.scope: Deactivated successfully.
Nov 22 00:23:49 np0005531754 podman[74966]: 2025-11-22 05:23:49.695575391 +0000 UTC m=+0.487003728 container died 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:49 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ac60b2cce4f7281438b1cb37ef41d79f83436a951f4fc1757824bf4a0a9fac7d-merged.mount: Deactivated successfully.
Nov 22 00:23:49 np0005531754 podman[74966]: 2025-11-22 05:23:49.761760957 +0000 UTC m=+0.553189214 container remove 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:49 np0005531754 systemd[1]: libpod-conmon-3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7.scope: Deactivated successfully.
Nov 22 00:23:49 np0005531754 podman[75001]: 2025-11-22 05:23:49.818853723 +0000 UTC m=+0.039105239 container create 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:23:49 np0005531754 systemd[1]: Started libpod-conmon-35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030.scope.
Nov 22 00:23:49 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:49 np0005531754 podman[75001]: 2025-11-22 05:23:49.882259325 +0000 UTC m=+0.102510871 container init 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:49 np0005531754 podman[75001]: 2025-11-22 05:23:49.887104015 +0000 UTC m=+0.107355581 container start 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:49 np0005531754 jolly_elion[75017]: 167 167
Nov 22 00:23:49 np0005531754 systemd[1]: libpod-35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030.scope: Deactivated successfully.
Nov 22 00:23:49 np0005531754 podman[75001]: 2025-11-22 05:23:49.891047478 +0000 UTC m=+0.111299034 container attach 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:49 np0005531754 podman[75001]: 2025-11-22 05:23:49.892147708 +0000 UTC m=+0.112399224 container died 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:23:49 np0005531754 podman[75001]: 2025-11-22 05:23:49.800586008 +0000 UTC m=+0.020837554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:49 np0005531754 podman[75001]: 2025-11-22 05:23:49.934158323 +0000 UTC m=+0.154409839 container remove 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:23:49 np0005531754 systemd[1]: libpod-conmon-35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030.scope: Deactivated successfully.
Nov 22 00:23:50 np0005531754 podman[75032]: 2025-11-22 05:23:50.003821532 +0000 UTC m=+0.049733571 container create 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:50 np0005531754 systemd[1]: Started libpod-conmon-364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828.scope.
Nov 22 00:23:50 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:50 np0005531754 podman[75032]: 2025-11-22 05:23:50.060492766 +0000 UTC m=+0.106404825 container init 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 00:23:50 np0005531754 podman[75032]: 2025-11-22 05:23:50.067079161 +0000 UTC m=+0.112991210 container start 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:23:50 np0005531754 podman[75032]: 2025-11-22 05:23:50.07117342 +0000 UTC m=+0.117085499 container attach 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:50 np0005531754 podman[75032]: 2025-11-22 05:23:49.979662301 +0000 UTC m=+0.025574420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:50 np0005531754 brave_stonebraker[75049]: AQDmSCFpDHZwBRAA+iksjtMVNlBTWmFMf6R2mw==
Nov 22 00:23:50 np0005531754 systemd[1]: libpod-364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828.scope: Deactivated successfully.
Nov 22 00:23:50 np0005531754 podman[75032]: 2025-11-22 05:23:50.095529746 +0000 UTC m=+0.141441785 container died 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:23:50 np0005531754 podman[75032]: 2025-11-22 05:23:50.143827218 +0000 UTC m=+0.189739267 container remove 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:23:50 np0005531754 systemd[1]: libpod-conmon-364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828.scope: Deactivated successfully.
Nov 22 00:23:50 np0005531754 podman[75068]: 2025-11-22 05:23:50.231750702 +0000 UTC m=+0.057328522 container create 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 00:23:50 np0005531754 systemd[1]: Started libpod-conmon-9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958.scope.
Nov 22 00:23:50 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:50 np0005531754 podman[75068]: 2025-11-22 05:23:50.203242975 +0000 UTC m=+0.028820855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:50 np0005531754 podman[75068]: 2025-11-22 05:23:50.303418944 +0000 UTC m=+0.128996834 container init 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 00:23:50 np0005531754 podman[75068]: 2025-11-22 05:23:50.308880169 +0000 UTC m=+0.134457989 container start 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 22 00:23:50 np0005531754 podman[75068]: 2025-11-22 05:23:50.319125861 +0000 UTC m=+0.144703751 container attach 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:50 np0005531754 youthful_sammet[75084]: AQDmSCFpN9rQFBAA81ixeuAoaRYrVhuKeGII8A==
Nov 22 00:23:50 np0005531754 systemd[1]: libpod-9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958.scope: Deactivated successfully.
Nov 22 00:23:50 np0005531754 podman[75068]: 2025-11-22 05:23:50.353608277 +0000 UTC m=+0.179186067 container died 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:23:50 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4975a723cf2792aed3c5269f25176d3440d67c553b666721134a2716388d04b2-merged.mount: Deactivated successfully.
Nov 22 00:23:50 np0005531754 podman[75068]: 2025-11-22 05:23:50.402029361 +0000 UTC m=+0.227607151 container remove 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:23:50 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:50 np0005531754 systemd[1]: libpod-conmon-9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958.scope: Deactivated successfully.
Nov 22 00:23:50 np0005531754 podman[75103]: 2025-11-22 05:23:50.475906443 +0000 UTC m=+0.047975415 container create f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 00:23:50 np0005531754 systemd[1]: Started libpod-conmon-f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a.scope.
Nov 22 00:23:50 np0005531754 podman[75103]: 2025-11-22 05:23:50.455859931 +0000 UTC m=+0.027928943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:50 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:51 np0005531754 podman[75103]: 2025-11-22 05:23:51.369618704 +0000 UTC m=+0.941687696 container init f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:51 np0005531754 podman[75103]: 2025-11-22 05:23:51.380896724 +0000 UTC m=+0.952965696 container start f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:23:51 np0005531754 tender_mestorf[75120]: AQDnSCFpPvbiFxAAgpu0aaumWZdNcU2o+Pdsag==
Nov 22 00:23:51 np0005531754 systemd[1]: libpod-f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a.scope: Deactivated successfully.
Nov 22 00:23:51 np0005531754 podman[75103]: 2025-11-22 05:23:51.404560842 +0000 UTC m=+0.976629854 container attach f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:23:51 np0005531754 podman[75103]: 2025-11-22 05:23:51.4052389 +0000 UTC m=+0.977307872 container died f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 00:23:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e7efa087d475ec5b56ca15c9b4eafd20701a847556e5ac7be2eb5922a28ab5bc-merged.mount: Deactivated successfully.
Nov 22 00:23:51 np0005531754 podman[75103]: 2025-11-22 05:23:51.499720598 +0000 UTC m=+1.071789600 container remove f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:23:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:51 np0005531754 systemd[1]: libpod-conmon-f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a.scope: Deactivated successfully.
Nov 22 00:23:51 np0005531754 podman[75141]: 2025-11-22 05:23:51.602817705 +0000 UTC m=+0.069925167 container create d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:23:51 np0005531754 systemd[1]: Started libpod-conmon-d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60.scope.
Nov 22 00:23:51 np0005531754 podman[75141]: 2025-11-22 05:23:51.574351449 +0000 UTC m=+0.041458951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694d98e09c395c7b0e4a76ae0b1214c3eb1387f1b37929090f0c1bfa966aa9a4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:51 np0005531754 podman[75141]: 2025-11-22 05:23:51.701181955 +0000 UTC m=+0.168289487 container init d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:51 np0005531754 podman[75141]: 2025-11-22 05:23:51.710934554 +0000 UTC m=+0.178042026 container start d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:23:51 np0005531754 podman[75141]: 2025-11-22 05:23:51.715448774 +0000 UTC m=+0.182556236 container attach d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:23:51 np0005531754 unruffled_morse[75157]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 22 00:23:51 np0005531754 unruffled_morse[75157]: setting min_mon_release = pacific
Nov 22 00:23:51 np0005531754 unruffled_morse[75157]: /usr/bin/monmaptool: set fsid to 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:51 np0005531754 unruffled_morse[75157]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 22 00:23:51 np0005531754 systemd[1]: libpod-d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60.scope: Deactivated successfully.
Nov 22 00:23:51 np0005531754 podman[75141]: 2025-11-22 05:23:51.761184728 +0000 UTC m=+0.228292160 container died d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:51 np0005531754 podman[75141]: 2025-11-22 05:23:51.930881072 +0000 UTC m=+0.397988534 container remove d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:51 np0005531754 systemd[1]: libpod-conmon-d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60.scope: Deactivated successfully.
Nov 22 00:23:52 np0005531754 podman[75176]: 2025-11-22 05:23:52.001218629 +0000 UTC m=+0.043420413 container create ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:23:52 np0005531754 systemd[1]: Started libpod-conmon-ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505.scope.
Nov 22 00:23:52 np0005531754 podman[75176]: 2025-11-22 05:23:51.981703042 +0000 UTC m=+0.023904846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:52 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:52 np0005531754 podman[75176]: 2025-11-22 05:23:52.1225569 +0000 UTC m=+0.164758774 container init ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 00:23:52 np0005531754 podman[75176]: 2025-11-22 05:23:52.131856717 +0000 UTC m=+0.174058541 container start ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:23:52 np0005531754 podman[75176]: 2025-11-22 05:23:52.135789242 +0000 UTC m=+0.177991106 container attach ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:23:52 np0005531754 systemd[1]: libpod-ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505.scope: Deactivated successfully.
Nov 22 00:23:52 np0005531754 podman[75176]: 2025-11-22 05:23:52.225004449 +0000 UTC m=+0.267206263 container died ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:23:52 np0005531754 systemd[1]: var-lib-containers-storage-overlay-eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253-merged.mount: Deactivated successfully.
Nov 22 00:23:52 np0005531754 podman[75176]: 2025-11-22 05:23:52.268113124 +0000 UTC m=+0.310314928 container remove ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 00:23:52 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:52 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:52 np0005531754 systemd[1]: libpod-conmon-ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505.scope: Deactivated successfully.
Nov 22 00:23:52 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:52 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:52 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:52 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:52 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:52 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:52 np0005531754 systemd[1]: Reached target All Ceph clusters and services.
Nov 22 00:23:52 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:52 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:52 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:53 np0005531754 systemd[1]: Reached target Ceph cluster 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:23:53 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:53 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:53 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:53 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:53 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:53 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:53 np0005531754 systemd[1]: Created slice Slice /system/ceph-13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:23:53 np0005531754 systemd[1]: Reached target System Time Set.
Nov 22 00:23:53 np0005531754 systemd[1]: Reached target System Time Synchronized.
Nov 22 00:23:53 np0005531754 systemd[1]: Starting Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:23:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:53 np0005531754 podman[75472]: 2025-11-22 05:23:53.884148418 +0000 UTC m=+0.056845549 container create 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:23:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:53 np0005531754 podman[75472]: 2025-11-22 05:23:53.85555938 +0000 UTC m=+0.028256571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:53 np0005531754 podman[75472]: 2025-11-22 05:23:53.970209133 +0000 UTC m=+0.142906254 container init 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:53 np0005531754 podman[75472]: 2025-11-22 05:23:53.977291531 +0000 UTC m=+0.149988642 container start 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:23:53 np0005531754 bash[75472]: 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5
Nov 22 00:23:53 np0005531754 systemd[1]: Started Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: pidfile_write: ignore empty --pid-file
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: load: jerasure load: lrc 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Git sha 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: DB SUMMARY
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: DB Session ID:  3Q880ZQ6T64W7W0R1Q28
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                                     Options.env: 0x559676215c40
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                                Options.info_log: 0x55967801ee80
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                                 Options.wal_dir: 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                    Options.write_buffer_manager: 0x55967802eb40
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                               Options.row_cache: None
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                              Options.wal_filter: None
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.wal_compression: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.max_background_jobs: 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Compression algorithms supported:
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kZSTD supported: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kXpressCompression supported: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kZlibCompression supported: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:           Options.merge_operator: 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:        Options.compaction_filter: None
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55967801ea80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5596780171f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:          Options.compression: NoCompression
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.num_levels: 7
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c4e45ab2-4273-47c3-96b1-648e5316c944
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789034027276, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789034029681, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "3Q880ZQ6T64W7W0R1Q28", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789034029791, "job": 1, "event": "recovery_finished"}
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559678040e00
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: DB pointer 0x5596780ca000
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5596780171f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@-1(???) e0 preinit fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-22T05:23:52.169278Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 22 00:23:54 np0005531754 podman[75492]: 2025-11-22 05:23:54.063129439 +0000 UTC m=+0.048734025 container create 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).mds e1 new map
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mkfs 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 00:23:54 np0005531754 systemd[1]: Started libpod-conmon-815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f.scope.
Nov 22 00:23:54 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:54 np0005531754 podman[75492]: 2025-11-22 05:23:54.042103051 +0000 UTC m=+0.027707617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52df07aef776543e564715bab648e810da8b4dcf07aeb6b711a81f88fdaf69e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52df07aef776543e564715bab648e810da8b4dcf07aeb6b711a81f88fdaf69e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52df07aef776543e564715bab648e810da8b4dcf07aeb6b711a81f88fdaf69e1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:54 np0005531754 podman[75492]: 2025-11-22 05:23:54.1633895 +0000 UTC m=+0.148994076 container init 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:23:54 np0005531754 podman[75492]: 2025-11-22 05:23:54.170888769 +0000 UTC m=+0.156493325 container start 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 00:23:54 np0005531754 podman[75492]: 2025-11-22 05:23:54.174433414 +0000 UTC m=+0.160038010 container attach 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 00:23:54 np0005531754 ceph-mon[75491]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866553743' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:  cluster:
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    id:     13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    health: HEALTH_OK
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]: 
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:  services:
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    mon: 1 daemons, quorum compute-0 (age 0.490728s)
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    mgr: no daemons active
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    osd: 0 osds: 0 up, 0 in
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]: 
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:  data:
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    pools:   0 pools, 0 pgs
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    objects: 0 objects, 0 B
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    usage:   0 B used, 0 B / 0 B avail
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]:    pgs:     
Nov 22 00:23:54 np0005531754 funny_goldberg[75545]: 
Nov 22 00:23:54 np0005531754 systemd[1]: libpod-815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f.scope: Deactivated successfully.
Nov 22 00:23:54 np0005531754 podman[75492]: 2025-11-22 05:23:54.568995746 +0000 UTC m=+0.554600342 container died 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:54 np0005531754 podman[75492]: 2025-11-22 05:23:54.631277179 +0000 UTC m=+0.616881765 container remove 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:23:54 np0005531754 systemd[1]: libpod-conmon-815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f.scope: Deactivated successfully.
Nov 22 00:23:54 np0005531754 podman[75583]: 2025-11-22 05:23:54.723236321 +0000 UTC m=+0.058288009 container create 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:23:54 np0005531754 systemd[1]: Started libpod-conmon-5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf.scope.
Nov 22 00:23:54 np0005531754 podman[75583]: 2025-11-22 05:23:54.696055259 +0000 UTC m=+0.031106967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:54 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:54 np0005531754 podman[75583]: 2025-11-22 05:23:54.823556663 +0000 UTC m=+0.158608321 container init 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:23:54 np0005531754 podman[75583]: 2025-11-22 05:23:54.837890793 +0000 UTC m=+0.172942491 container start 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:23:54 np0005531754 podman[75583]: 2025-11-22 05:23:54.842550297 +0000 UTC m=+0.177601975 container attach 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:23:55 np0005531754 ceph-mon[75491]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 00:23:55 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 00:23:55 np0005531754 ceph-mon[75491]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 00:23:55 np0005531754 ceph-mon[75491]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 00:23:55 np0005531754 charming_matsumoto[75599]: 
Nov 22 00:23:55 np0005531754 charming_matsumoto[75599]: [global]
Nov 22 00:23:55 np0005531754 charming_matsumoto[75599]: #011fsid = 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:55 np0005531754 charming_matsumoto[75599]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 22 00:23:55 np0005531754 charming_matsumoto[75599]: #011osd_crush_chooseleaf_type = 0
Nov 22 00:23:55 np0005531754 systemd[1]: libpod-5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf.scope: Deactivated successfully.
Nov 22 00:23:55 np0005531754 podman[75583]: 2025-11-22 05:23:55.253036043 +0000 UTC m=+0.588087741 container died 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:23:55 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e-merged.mount: Deactivated successfully.
Nov 22 00:23:55 np0005531754 podman[75583]: 2025-11-22 05:23:55.32754535 +0000 UTC m=+0.662597048 container remove 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:55 np0005531754 systemd[1]: libpod-conmon-5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf.scope: Deactivated successfully.
Nov 22 00:23:55 np0005531754 podman[75639]: 2025-11-22 05:23:55.427027181 +0000 UTC m=+0.069476205 container create 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:55 np0005531754 systemd[1]: Started libpod-conmon-075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c.scope.
Nov 22 00:23:55 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:55 np0005531754 podman[75639]: 2025-11-22 05:23:55.396663265 +0000 UTC m=+0.039112369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:55 np0005531754 podman[75639]: 2025-11-22 05:23:55.510065065 +0000 UTC m=+0.152514189 container init 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:23:55 np0005531754 podman[75639]: 2025-11-22 05:23:55.519577267 +0000 UTC m=+0.162026321 container start 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:23:55 np0005531754 podman[75639]: 2025-11-22 05:23:55.524000255 +0000 UTC m=+0.166449309 container attach 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:55 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:23:55 np0005531754 ceph-mon[75491]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129884442' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:23:55 np0005531754 systemd[1]: libpod-075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c.scope: Deactivated successfully.
Nov 22 00:23:55 np0005531754 podman[75639]: 2025-11-22 05:23:55.960936333 +0000 UTC m=+0.603385387 container died 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:55 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752-merged.mount: Deactivated successfully.
Nov 22 00:23:56 np0005531754 podman[75639]: 2025-11-22 05:23:56.006610285 +0000 UTC m=+0.649059299 container remove 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:23:56 np0005531754 systemd[1]: libpod-conmon-075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c.scope: Deactivated successfully.
Nov 22 00:23:56 np0005531754 systemd[1]: Stopping Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:23:56 np0005531754 ceph-mon[75491]: from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 00:23:56 np0005531754 ceph-mon[75491]: from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 00:23:56 np0005531754 ceph-mon[75491]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 00:23:56 np0005531754 ceph-mon[75491]: mon.compute-0@0(leader) e1 shutdown
Nov 22 00:23:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0[75487]: 2025-11-22T05:23:56.210+0000 7f75ef9a3640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 00:23:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0[75487]: 2025-11-22T05:23:56.210+0000 7f75ef9a3640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 00:23:56 np0005531754 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 00:23:56 np0005531754 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 00:23:56 np0005531754 podman[75720]: 2025-11-22 05:23:56.302899529 +0000 UTC m=+0.124096605 container died 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 00:23:56 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c-merged.mount: Deactivated successfully.
Nov 22 00:23:56 np0005531754 podman[75720]: 2025-11-22 05:23:56.345308485 +0000 UTC m=+0.166505551 container remove 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:23:56 np0005531754 bash[75720]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0
Nov 22 00:23:56 np0005531754 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 00:23:56 np0005531754 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mon.compute-0.service: Deactivated successfully.
Nov 22 00:23:56 np0005531754 systemd[1]: Stopped Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:23:56 np0005531754 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mon.compute-0.service: Consumed 1.094s CPU time.
Nov 22 00:23:56 np0005531754 systemd[1]: Starting Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:23:56 np0005531754 podman[75821]: 2025-11-22 05:23:56.831314705 +0000 UTC m=+0.069314211 container create d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:23:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:56 np0005531754 podman[75821]: 2025-11-22 05:23:56.898233001 +0000 UTC m=+0.136232517 container init d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:56 np0005531754 podman[75821]: 2025-11-22 05:23:56.805803868 +0000 UTC m=+0.043803414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:56 np0005531754 podman[75821]: 2025-11-22 05:23:56.903582433 +0000 UTC m=+0.141581919 container start d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:23:56 np0005531754 bash[75821]: d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107
Nov 22 00:23:56 np0005531754 systemd[1]: Started Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: pidfile_write: ignore empty --pid-file
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: load: jerasure load: lrc 
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Git sha 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: DB SUMMARY
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: DB Session ID:  OCOOLGAJEIQ903CUBBA6
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55672 ; 
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                                     Options.env: 0x55fdf8ffbc40
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                                Options.info_log: 0x55fdfafd1040
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                                 Options.wal_dir: 
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                    Options.write_buffer_manager: 0x55fdfafe0b40
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                               Options.row_cache: None
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                              Options.wal_filter: None
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.wal_compression: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.max_background_jobs: 2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Compression algorithms supported:
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kZSTD supported: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kXpressCompression supported: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kBZip2Compression supported: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kZSTDNotFinalCompression supported: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kLZ4Compression supported: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kZlibCompression supported: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kLZ4HCCompression supported: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     kSnappyCompression supported: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:           Options.merge_operator: 
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:        Options.compaction_filter: None
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fdfafd0c40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fdfafc91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:          Options.compression: NoCompression
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.num_levels: 7
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c4e45ab2-4273-47c3-96b1-648e5316c944
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789036974266, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789036984447, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53793, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51382, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789036, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789036984636, "job": 1, "event": "recovery_finished"}
Nov 22 00:23:56 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fdfaff2e00
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: rocksdb: DB pointer 0x55fdfb07c000
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.68 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 1.68 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???) e1 preinit fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???).mds e1 new map
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 22 00:23:57 np0005531754 podman[75841]: 2025-11-22 05:23:57.037437176 +0000 UTC m=+0.072843875 container create 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 00:23:57 np0005531754 systemd[1]: Started libpod-conmon-6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95.scope.
Nov 22 00:23:57 np0005531754 podman[75841]: 2025-11-22 05:23:56.995786951 +0000 UTC m=+0.031193690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 00:23:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:57 np0005531754 podman[75841]: 2025-11-22 05:23:57.152815498 +0000 UTC m=+0.188222287 container init 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:23:57 np0005531754 podman[75841]: 2025-11-22 05:23:57.160813451 +0000 UTC m=+0.196220190 container start 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:23:57 np0005531754 podman[75841]: 2025-11-22 05:23:57.164881758 +0000 UTC m=+0.200288487 container attach 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:23:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 22 00:23:57 np0005531754 systemd[1]: libpod-6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95.scope: Deactivated successfully.
Nov 22 00:23:57 np0005531754 podman[75841]: 2025-11-22 05:23:57.601931679 +0000 UTC m=+0.637338368 container died 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:57 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2-merged.mount: Deactivated successfully.
Nov 22 00:23:57 np0005531754 podman[75841]: 2025-11-22 05:23:57.655388188 +0000 UTC m=+0.690794887 container remove 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 00:23:57 np0005531754 systemd[1]: libpod-conmon-6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95.scope: Deactivated successfully.
Nov 22 00:23:57 np0005531754 podman[75935]: 2025-11-22 05:23:57.712904275 +0000 UTC m=+0.036190622 container create ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:23:57 np0005531754 systemd[1]: Started libpod-conmon-ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6.scope.
Nov 22 00:23:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:57 np0005531754 podman[75935]: 2025-11-22 05:23:57.786999831 +0000 UTC m=+0.110286188 container init ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:23:57 np0005531754 podman[75935]: 2025-11-22 05:23:57.698326837 +0000 UTC m=+0.021613204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:57 np0005531754 podman[75935]: 2025-11-22 05:23:57.79785924 +0000 UTC m=+0.121145597 container start ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:23:57 np0005531754 podman[75935]: 2025-11-22 05:23:57.802201515 +0000 UTC m=+0.125487872 container attach ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:23:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 22 00:23:58 np0005531754 systemd[1]: libpod-ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6.scope: Deactivated successfully.
Nov 22 00:23:58 np0005531754 podman[75935]: 2025-11-22 05:23:58.236517743 +0000 UTC m=+0.559804130 container died ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:23:58 np0005531754 systemd[1]: var-lib-containers-storage-overlay-aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480-merged.mount: Deactivated successfully.
Nov 22 00:23:58 np0005531754 podman[75935]: 2025-11-22 05:23:58.293280659 +0000 UTC m=+0.616566996 container remove ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:23:58 np0005531754 systemd[1]: libpod-conmon-ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6.scope: Deactivated successfully.
Nov 22 00:23:58 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:58 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:58 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:58 np0005531754 systemd[1]: Reloading.
Nov 22 00:23:58 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:23:58 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:23:58 np0005531754 systemd[1]: Starting Ceph mgr.compute-0.mscchl for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:23:59 np0005531754 podman[76114]: 2025-11-22 05:23:59.158960097 +0000 UTC m=+0.056680926 container create 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 00:23:59 np0005531754 podman[76114]: 2025-11-22 05:23:59.12894877 +0000 UTC m=+0.026669689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/var/lib/ceph/mgr/ceph-compute-0.mscchl supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:59 np0005531754 podman[76114]: 2025-11-22 05:23:59.248770421 +0000 UTC m=+0.146491290 container init 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:23:59 np0005531754 podman[76114]: 2025-11-22 05:23:59.258971381 +0000 UTC m=+0.156692250 container start 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 22 00:23:59 np0005531754 bash[76114]: 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f
Nov 22 00:23:59 np0005531754 systemd[1]: Started Ceph mgr.compute-0.mscchl for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Nov 22 00:23:59 np0005531754 podman[76135]: 2025-11-22 05:23:59.366823564 +0000 UTC m=+0.054633741 container create 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:23:59 np0005531754 systemd[1]: Started libpod-conmon-0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204.scope.
Nov 22 00:23:59 np0005531754 podman[76135]: 2025-11-22 05:23:59.340926387 +0000 UTC m=+0.028736544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Nov 22 00:23:59 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:23:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:23:59 np0005531754 podman[76135]: 2025-11-22 05:23:59.487156938 +0000 UTC m=+0.174967145 container init 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:23:59 np0005531754 podman[76135]: 2025-11-22 05:23:59.500976915 +0000 UTC m=+0.188787082 container start 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:23:59 np0005531754 podman[76135]: 2025-11-22 05:23:59.505545196 +0000 UTC m=+0.193355373 container attach 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Nov 22 00:23:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:23:59.737+0000 7f82420f8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 00:23:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:23:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2983410945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:23:59 np0005531754 funny_haibt[76175]: 
Nov 22 00:23:59 np0005531754 funny_haibt[76175]: {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "health": {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "status": "HEALTH_OK",
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "checks": {},
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "mutes": []
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    },
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "election_epoch": 5,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "quorum": [
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        0
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    ],
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "quorum_names": [
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "compute-0"
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    ],
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "quorum_age": 2,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "monmap": {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "epoch": 1,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "min_mon_release_name": "reef",
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_mons": 1
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    },
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "osdmap": {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "epoch": 1,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_osds": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_up_osds": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "osd_up_since": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_in_osds": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "osd_in_since": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_remapped_pgs": 0
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    },
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "pgmap": {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "pgs_by_state": [],
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_pgs": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_pools": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_objects": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "data_bytes": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "bytes_used": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "bytes_avail": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "bytes_total": 0
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    },
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "fsmap": {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "epoch": 1,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "by_rank": [],
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "up:standby": 0
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    },
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "mgrmap": {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "available": false,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "num_standbys": 0,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "modules": [
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:            "iostat",
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:            "nfs",
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:            "restful"
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        ],
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "services": {}
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    },
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "servicemap": {
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "epoch": 1,
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:        "services": {}
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    },
Nov 22 00:23:59 np0005531754 funny_haibt[76175]:    "progress_events": {}
Nov 22 00:23:59 np0005531754 funny_haibt[76175]: }
Nov 22 00:23:59 np0005531754 systemd[1]: libpod-0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204.scope: Deactivated successfully.
Nov 22 00:23:59 np0005531754 podman[76135]: 2025-11-22 05:23:59.961459757 +0000 UTC m=+0.649269924 container died 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 00:23:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:23:59.976+0000 7f82420f8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 00:23:59 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Nov 22 00:23:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1-merged.mount: Deactivated successfully.
Nov 22 00:24:00 np0005531754 podman[76135]: 2025-11-22 05:24:00.007781957 +0000 UTC m=+0.695592084 container remove 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:24:00 np0005531754 systemd[1]: libpod-conmon-0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204.scope: Deactivated successfully.
Nov 22 00:24:01 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Nov 22 00:24:02 np0005531754 podman[76224]: 2025-11-22 05:24:02.11245335 +0000 UTC m=+0.060020264 container create 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:02 np0005531754 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 00:24:02 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Nov 22 00:24:02 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:02.130+0000 7f82420f8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 00:24:02 np0005531754 systemd[1]: Started libpod-conmon-3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71.scope.
Nov 22 00:24:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:02 np0005531754 podman[76224]: 2025-11-22 05:24:02.085016551 +0000 UTC m=+0.032583566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:02 np0005531754 podman[76224]: 2025-11-22 05:24:02.187977055 +0000 UTC m=+0.135544009 container init 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:24:02 np0005531754 podman[76224]: 2025-11-22 05:24:02.194339954 +0000 UTC m=+0.141906908 container start 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:24:02 np0005531754 podman[76224]: 2025-11-22 05:24:02.198259097 +0000 UTC m=+0.145826061 container attach 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:24:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:02 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1621867992' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:02 np0005531754 nice_haibt[76240]: 
Nov 22 00:24:02 np0005531754 nice_haibt[76240]: {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "health": {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "status": "HEALTH_OK",
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "checks": {},
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "mutes": []
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    },
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "election_epoch": 5,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "quorum": [
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        0
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    ],
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "quorum_names": [
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "compute-0"
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    ],
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "quorum_age": 5,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "monmap": {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "epoch": 1,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "min_mon_release_name": "reef",
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_mons": 1
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    },
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "osdmap": {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "epoch": 1,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_osds": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_up_osds": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "osd_up_since": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_in_osds": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "osd_in_since": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_remapped_pgs": 0
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    },
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "pgmap": {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "pgs_by_state": [],
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_pgs": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_pools": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_objects": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "data_bytes": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "bytes_used": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "bytes_avail": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "bytes_total": 0
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    },
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "fsmap": {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "epoch": 1,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "by_rank": [],
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "up:standby": 0
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    },
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "mgrmap": {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "available": false,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "num_standbys": 0,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "modules": [
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:            "iostat",
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:            "nfs",
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:            "restful"
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        ],
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "services": {}
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    },
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "servicemap": {
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "epoch": 1,
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:        "services": {}
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    },
Nov 22 00:24:02 np0005531754 nice_haibt[76240]:    "progress_events": {}
Nov 22 00:24:02 np0005531754 nice_haibt[76240]: }
Nov 22 00:24:02 np0005531754 systemd[1]: libpod-3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71.scope: Deactivated successfully.
Nov 22 00:24:02 np0005531754 podman[76224]: 2025-11-22 05:24:02.588698671 +0000 UTC m=+0.536265585 container died 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:24:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2-merged.mount: Deactivated successfully.
Nov 22 00:24:02 np0005531754 podman[76224]: 2025-11-22 05:24:02.63238543 +0000 UTC m=+0.579952354 container remove 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:24:02 np0005531754 systemd[1]: libpod-conmon-3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71.scope: Deactivated successfully.
Nov 22 00:24:03 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Nov 22 00:24:03 np0005531754 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 00:24:03 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 00:24:03 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:03.734+0000 7f82420f8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 00:24:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 00:24:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 00:24:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]:  from numpy import show_config as show_numpy_config
Nov 22 00:24:04 np0005531754 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 00:24:04 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Nov 22 00:24:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:04.269+0000 7f82420f8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 00:24:04 np0005531754 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 00:24:04 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Nov 22 00:24:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:04.511+0000 7f82420f8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 00:24:04 np0005531754 podman[76277]: 2025-11-22 05:24:04.733504691 +0000 UTC m=+0.070195534 container create fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:04 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Nov 22 00:24:04 np0005531754 systemd[1]: Started libpod-conmon-fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e.scope.
Nov 22 00:24:04 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:04 np0005531754 podman[76277]: 2025-11-22 05:24:04.707240124 +0000 UTC m=+0.043931017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:04 np0005531754 podman[76277]: 2025-11-22 05:24:04.812855457 +0000 UTC m=+0.149546310 container init fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:04 np0005531754 podman[76277]: 2025-11-22 05:24:04.827843205 +0000 UTC m=+0.164534048 container start fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:04 np0005531754 podman[76277]: 2025-11-22 05:24:04.832723005 +0000 UTC m=+0.169413868 container attach fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:24:04 np0005531754 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 00:24:04 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Nov 22 00:24:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:04.985+0000 7f82420f8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 00:24:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422140615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]: 
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]: {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "health": {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "status": "HEALTH_OK",
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "checks": {},
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "mutes": []
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    },
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "election_epoch": 5,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "quorum": [
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        0
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    ],
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "quorum_names": [
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "compute-0"
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    ],
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "quorum_age": 8,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "monmap": {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "epoch": 1,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "min_mon_release_name": "reef",
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_mons": 1
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    },
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "osdmap": {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "epoch": 1,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_osds": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_up_osds": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "osd_up_since": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_in_osds": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "osd_in_since": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_remapped_pgs": 0
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    },
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "pgmap": {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "pgs_by_state": [],
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_pgs": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_pools": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_objects": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "data_bytes": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "bytes_used": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "bytes_avail": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "bytes_total": 0
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    },
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "fsmap": {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "epoch": 1,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "by_rank": [],
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "up:standby": 0
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    },
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "mgrmap": {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "available": false,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "num_standbys": 0,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "modules": [
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:            "iostat",
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:            "nfs",
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:            "restful"
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        ],
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "services": {}
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    },
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "servicemap": {
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "epoch": 1,
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:        "services": {}
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    },
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]:    "progress_events": {}
Nov 22 00:24:05 np0005531754 infallible_poincare[76294]: }
Nov 22 00:24:05 np0005531754 systemd[1]: libpod-fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e.scope: Deactivated successfully.
Nov 22 00:24:05 np0005531754 podman[76277]: 2025-11-22 05:24:05.254424098 +0000 UTC m=+0.591114911 container died fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:24:05 np0005531754 systemd[1]: var-lib-containers-storage-overlay-65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0-merged.mount: Deactivated successfully.
Nov 22 00:24:05 np0005531754 podman[76277]: 2025-11-22 05:24:05.314177494 +0000 UTC m=+0.650868317 container remove fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:05 np0005531754 systemd[1]: libpod-conmon-fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e.scope: Deactivated successfully.
Nov 22 00:24:06 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Nov 22 00:24:06 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 00:24:07 np0005531754 podman[76331]: 2025-11-22 05:24:07.403575353 +0000 UTC m=+0.063275211 container create 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:24:07 np0005531754 systemd[1]: Started libpod-conmon-60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1.scope.
Nov 22 00:24:07 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:07 np0005531754 podman[76331]: 2025-11-22 05:24:07.386724865 +0000 UTC m=+0.046424753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:07 np0005531754 podman[76331]: 2025-11-22 05:24:07.503716691 +0000 UTC m=+0.163416649 container init 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:24:07 np0005531754 podman[76331]: 2025-11-22 05:24:07.511635131 +0000 UTC m=+0.171335029 container start 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:24:07 np0005531754 podman[76331]: 2025-11-22 05:24:07.51577145 +0000 UTC m=+0.175471348 container attach 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:24:07 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Nov 22 00:24:07 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Nov 22 00:24:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480979014' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:07 np0005531754 busy_hoover[76347]: 
Nov 22 00:24:07 np0005531754 busy_hoover[76347]: {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "health": {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "status": "HEALTH_OK",
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "checks": {},
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "mutes": []
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    },
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "election_epoch": 5,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "quorum": [
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        0
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    ],
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "quorum_names": [
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "compute-0"
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    ],
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "quorum_age": 10,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "monmap": {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "epoch": 1,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "min_mon_release_name": "reef",
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_mons": 1
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    },
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "osdmap": {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "epoch": 1,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_osds": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_up_osds": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "osd_up_since": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_in_osds": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "osd_in_since": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_remapped_pgs": 0
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    },
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "pgmap": {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "pgs_by_state": [],
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_pgs": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_pools": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_objects": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "data_bytes": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "bytes_used": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "bytes_avail": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "bytes_total": 0
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    },
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "fsmap": {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "epoch": 1,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "by_rank": [],
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "up:standby": 0
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    },
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "mgrmap": {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "available": false,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "num_standbys": 0,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "modules": [
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:            "iostat",
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:            "nfs",
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:            "restful"
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        ],
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "services": {}
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    },
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "servicemap": {
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "epoch": 1,
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:        "services": {}
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    },
Nov 22 00:24:07 np0005531754 busy_hoover[76347]:    "progress_events": {}
Nov 22 00:24:07 np0005531754 busy_hoover[76347]: }
Nov 22 00:24:07 np0005531754 systemd[1]: libpod-60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1.scope: Deactivated successfully.
Nov 22 00:24:07 np0005531754 podman[76331]: 2025-11-22 05:24:07.912406468 +0000 UTC m=+0.572106336 container died 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay-80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10-merged.mount: Deactivated successfully.
Nov 22 00:24:07 np0005531754 podman[76331]: 2025-11-22 05:24:07.984055401 +0000 UTC m=+0.643755259 container remove 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:24:07 np0005531754 systemd[1]: libpod-conmon-60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1.scope: Deactivated successfully.
Nov 22 00:24:08 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:08.668+0000 7f82420f8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 00:24:08 np0005531754 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 00:24:08 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Nov 22 00:24:09 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:09.353+0000 7f82420f8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:09 np0005531754 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:09 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 00:24:09 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:09.623+0000 7f82420f8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 00:24:09 np0005531754 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 00:24:09 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Nov 22 00:24:09 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:09.864+0000 7f82420f8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 00:24:09 np0005531754 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 00:24:09 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 00:24:10 np0005531754 podman[76384]: 2025-11-22 05:24:10.047726427 +0000 UTC m=+0.039552072 container create a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:24:10 np0005531754 systemd[1]: Started libpod-conmon-a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb.scope.
Nov 22 00:24:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:10 np0005531754 podman[76384]: 2025-11-22 05:24:10.032705408 +0000 UTC m=+0.024531083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:10 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:10.141+0000 7f82420f8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 00:24:10 np0005531754 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 00:24:10 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Nov 22 00:24:10 np0005531754 podman[76384]: 2025-11-22 05:24:10.145528532 +0000 UTC m=+0.137354227 container init a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:24:10 np0005531754 podman[76384]: 2025-11-22 05:24:10.152836577 +0000 UTC m=+0.144662242 container start a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:24:10 np0005531754 podman[76384]: 2025-11-22 05:24:10.156457322 +0000 UTC m=+0.148283017 container attach a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:24:10 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:10.361+0000 7f82420f8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 00:24:10 np0005531754 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 00:24:10 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Nov 22 00:24:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141882630' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:10 np0005531754 loving_swanson[76400]: 
Nov 22 00:24:10 np0005531754 loving_swanson[76400]: {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "health": {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "status": "HEALTH_OK",
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "checks": {},
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "mutes": []
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    },
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "election_epoch": 5,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "quorum": [
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        0
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    ],
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "quorum_names": [
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "compute-0"
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    ],
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "quorum_age": 13,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "monmap": {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "epoch": 1,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "min_mon_release_name": "reef",
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_mons": 1
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    },
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "osdmap": {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "epoch": 1,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_osds": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_up_osds": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "osd_up_since": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_in_osds": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "osd_in_since": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_remapped_pgs": 0
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    },
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "pgmap": {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "pgs_by_state": [],
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_pgs": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_pools": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_objects": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "data_bytes": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "bytes_used": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "bytes_avail": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "bytes_total": 0
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    },
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "fsmap": {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "epoch": 1,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "by_rank": [],
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "up:standby": 0
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    },
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "mgrmap": {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "available": false,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "num_standbys": 0,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "modules": [
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:            "iostat",
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:            "nfs",
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:            "restful"
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        ],
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "services": {}
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    },
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "servicemap": {
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "epoch": 1,
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:        "services": {}
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    },
Nov 22 00:24:10 np0005531754 loving_swanson[76400]:    "progress_events": {}
Nov 22 00:24:10 np0005531754 loving_swanson[76400]: }
Nov 22 00:24:10 np0005531754 systemd[1]: libpod-a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb.scope: Deactivated successfully.
Nov 22 00:24:10 np0005531754 podman[76426]: 2025-11-22 05:24:10.618679041 +0000 UTC m=+0.040896927 container died a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d-merged.mount: Deactivated successfully.
Nov 22 00:24:10 np0005531754 podman[76426]: 2025-11-22 05:24:10.666061489 +0000 UTC m=+0.088279365 container remove a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:10 np0005531754 systemd[1]: libpod-conmon-a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb.scope: Deactivated successfully.
Nov 22 00:24:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:11.380+0000 7f82420f8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 00:24:11 np0005531754 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 00:24:11 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Nov 22 00:24:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:11.702+0000 7f82420f8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 00:24:11 np0005531754 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 00:24:11 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Nov 22 00:24:12 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Nov 22 00:24:12 np0005531754 podman[76441]: 2025-11-22 05:24:12.760008549 +0000 UTC m=+0.056084900 container create 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:24:12 np0005531754 systemd[1]: Started libpod-conmon-00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e.scope.
Nov 22 00:24:12 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:12 np0005531754 podman[76441]: 2025-11-22 05:24:12.734104811 +0000 UTC m=+0.030181152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:12 np0005531754 podman[76441]: 2025-11-22 05:24:12.860857986 +0000 UTC m=+0.156934327 container init 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:24:12 np0005531754 podman[76441]: 2025-11-22 05:24:12.8704232 +0000 UTC m=+0.166499511 container start 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:12 np0005531754 podman[76441]: 2025-11-22 05:24:12.874176489 +0000 UTC m=+0.170252820 container attach 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:13 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:13.086+0000 7f82420f8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 00:24:13 np0005531754 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 00:24:13 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Nov 22 00:24:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315193514' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:13 np0005531754 zen_borg[76457]: 
Nov 22 00:24:13 np0005531754 zen_borg[76457]: {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "health": {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "status": "HEALTH_OK",
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "checks": {},
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "mutes": []
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    },
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "election_epoch": 5,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "quorum": [
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        0
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    ],
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "quorum_names": [
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "compute-0"
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    ],
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "quorum_age": 16,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "monmap": {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "epoch": 1,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "min_mon_release_name": "reef",
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_mons": 1
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    },
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "osdmap": {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "epoch": 1,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_osds": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_up_osds": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "osd_up_since": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_in_osds": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "osd_in_since": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_remapped_pgs": 0
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    },
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "pgmap": {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "pgs_by_state": [],
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_pgs": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_pools": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_objects": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "data_bytes": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "bytes_used": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "bytes_avail": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "bytes_total": 0
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    },
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "fsmap": {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "epoch": 1,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "by_rank": [],
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "up:standby": 0
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    },
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "mgrmap": {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "available": false,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "num_standbys": 0,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "modules": [
Nov 22 00:24:13 np0005531754 zen_borg[76457]:            "iostat",
Nov 22 00:24:13 np0005531754 zen_borg[76457]:            "nfs",
Nov 22 00:24:13 np0005531754 zen_borg[76457]:            "restful"
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        ],
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "services": {}
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    },
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "servicemap": {
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "epoch": 1,
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:13 np0005531754 zen_borg[76457]:        "services": {}
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    },
Nov 22 00:24:13 np0005531754 zen_borg[76457]:    "progress_events": {}
Nov 22 00:24:13 np0005531754 zen_borg[76457]: }
Nov 22 00:24:13 np0005531754 systemd[1]: libpod-00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e.scope: Deactivated successfully.
Nov 22 00:24:13 np0005531754 podman[76483]: 2025-11-22 05:24:13.303756731 +0000 UTC m=+0.021605954 container died 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc-merged.mount: Deactivated successfully.
Nov 22 00:24:13 np0005531754 podman[76483]: 2025-11-22 05:24:13.356257525 +0000 UTC m=+0.074106728 container remove 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:13 np0005531754 systemd[1]: libpod-conmon-00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e.scope: Deactivated successfully.
Nov 22 00:24:15 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:15.124+0000 7f82420f8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 00:24:15 np0005531754 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 00:24:15 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Nov 22 00:24:15 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:15.375+0000 7f82420f8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 00:24:15 np0005531754 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 00:24:15 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Nov 22 00:24:15 np0005531754 podman[76498]: 2025-11-22 05:24:15.431400206 +0000 UTC m=+0.039957012 container create 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:15 np0005531754 systemd[1]: Started libpod-conmon-8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7.scope.
Nov 22 00:24:15 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:15 np0005531754 podman[76498]: 2025-11-22 05:24:15.506381335 +0000 UTC m=+0.114938201 container init 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:15 np0005531754 podman[76498]: 2025-11-22 05:24:15.413318836 +0000 UTC m=+0.021875672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:15 np0005531754 podman[76498]: 2025-11-22 05:24:15.511126212 +0000 UTC m=+0.119683028 container start 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:24:15 np0005531754 podman[76498]: 2025-11-22 05:24:15.514379178 +0000 UTC m=+0.122935994 container attach 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:15 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:15.622+0000 7f82420f8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 00:24:15 np0005531754 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 00:24:15 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Nov 22 00:24:15 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Nov 22 00:24:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320238895' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]: 
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]: {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "health": {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "status": "HEALTH_OK",
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "checks": {},
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "mutes": []
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    },
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "election_epoch": 5,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "quorum": [
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        0
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    ],
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "quorum_names": [
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "compute-0"
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    ],
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "quorum_age": 18,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "monmap": {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "epoch": 1,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "min_mon_release_name": "reef",
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_mons": 1
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    },
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "osdmap": {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "epoch": 1,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_osds": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_up_osds": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "osd_up_since": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_in_osds": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "osd_in_since": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_remapped_pgs": 0
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    },
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "pgmap": {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "pgs_by_state": [],
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_pgs": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_pools": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_objects": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "data_bytes": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "bytes_used": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "bytes_avail": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "bytes_total": 0
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    },
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "fsmap": {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "epoch": 1,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "by_rank": [],
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "up:standby": 0
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    },
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "mgrmap": {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "available": false,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "num_standbys": 0,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "modules": [
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:            "iostat",
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:            "nfs",
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:            "restful"
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        ],
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "services": {}
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    },
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "servicemap": {
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "epoch": 1,
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:        "services": {}
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    },
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]:    "progress_events": {}
Nov 22 00:24:15 np0005531754 stoic_rubin[76515]: }
Nov 22 00:24:15 np0005531754 systemd[1]: libpod-8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7.scope: Deactivated successfully.
Nov 22 00:24:15 np0005531754 podman[76498]: 2025-11-22 05:24:15.882238032 +0000 UTC m=+0.490794838 container died 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:15 np0005531754 systemd[1]: var-lib-containers-storage-overlay-67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3-merged.mount: Deactivated successfully.
Nov 22 00:24:15 np0005531754 podman[76498]: 2025-11-22 05:24:15.925307595 +0000 UTC m=+0.533864391 container remove 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:15 np0005531754 systemd[1]: libpod-conmon-8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7.scope: Deactivated successfully.
Nov 22 00:24:16 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:16.116+0000 7f82420f8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 00:24:16 np0005531754 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 00:24:16 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Nov 22 00:24:16 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:16.348+0000 7f82420f8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 00:24:16 np0005531754 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 00:24:16 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Nov 22 00:24:16 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:16.943+0000 7f82420f8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 00:24:16 np0005531754 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 00:24:16 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 00:24:17 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:17.639+0000 7f82420f8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:17 np0005531754 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:17 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Nov 22 00:24:18 np0005531754 podman[76555]: 2025-11-22 05:24:18.00551651 +0000 UTC m=+0.043934577 container create af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:24:18 np0005531754 systemd[1]: Started libpod-conmon-af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa.scope.
Nov 22 00:24:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:18 np0005531754 podman[76555]: 2025-11-22 05:24:18.08611299 +0000 UTC m=+0.124531117 container init af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:18 np0005531754 podman[76555]: 2025-11-22 05:24:17.991764085 +0000 UTC m=+0.030182172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:18 np0005531754 podman[76555]: 2025-11-22 05:24:18.094455871 +0000 UTC m=+0.132873948 container start af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:24:18 np0005531754 podman[76555]: 2025-11-22 05:24:18.098738945 +0000 UTC m=+0.137157042 container attach af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:18.337+0000 7f82420f8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962159204' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]: 
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]: {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "health": {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "status": "HEALTH_OK",
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "checks": {},
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "mutes": []
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    },
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "election_epoch": 5,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "quorum": [
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        0
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    ],
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "quorum_names": [
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "compute-0"
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    ],
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "quorum_age": 21,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "monmap": {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "epoch": 1,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "min_mon_release_name": "reef",
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_mons": 1
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    },
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "osdmap": {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "epoch": 1,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_osds": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_up_osds": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "osd_up_since": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_in_osds": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "osd_in_since": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_remapped_pgs": 0
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    },
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "pgmap": {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "pgs_by_state": [],
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_pgs": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_pools": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_objects": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "data_bytes": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "bytes_used": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "bytes_avail": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "bytes_total": 0
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    },
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "fsmap": {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "epoch": 1,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "by_rank": [],
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "up:standby": 0
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    },
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "mgrmap": {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "available": false,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "num_standbys": 0,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "modules": [
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:            "iostat",
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:            "nfs",
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:            "restful"
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        ],
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "services": {}
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    },
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "servicemap": {
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "epoch": 1,
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:        "services": {}
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    },
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]:    "progress_events": {}
Nov 22 00:24:18 np0005531754 affectionate_ritchie[76571]: }
Nov 22 00:24:18 np0005531754 systemd[1]: libpod-af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa.scope: Deactivated successfully.
Nov 22 00:24:18 np0005531754 podman[76555]: 2025-11-22 05:24:18.461828942 +0000 UTC m=+0.500247089 container died af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:24:18 np0005531754 systemd[1]: var-lib-containers-storage-overlay-cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79-merged.mount: Deactivated successfully.
Nov 22 00:24:18 np0005531754 podman[76555]: 2025-11-22 05:24:18.518370302 +0000 UTC m=+0.556788399 container remove af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:18 np0005531754 systemd[1]: libpod-conmon-af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa.scope: Deactivated successfully.
Nov 22 00:24:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:18.574+0000 7f82420f8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x555a6a2231e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mscchl
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mscchl(active, starting, since 0.0139237s)
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"}]: dispatch
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [balancer INFO root] Starting
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: crash
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Manager daemon compute-0.mscchl is now available
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:24:18
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [balancer INFO root] No pools available
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] Starting
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: progress
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [progress INFO root] Loading...
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [progress INFO root] No stored events to load
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [progress INFO root] Loaded [] historic events
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: restful
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: status
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"} v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:18 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 22 00:24:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: Activating manager daemon compute-0.mscchl
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: Manager daemon compute-0.mscchl is now available
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:19 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mscchl(active, since 1.0286s)
Nov 22 00:24:20 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:20 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mscchl(active, since 2s)
Nov 22 00:24:20 np0005531754 podman[76688]: 2025-11-22 05:24:20.63108057 +0000 UTC m=+0.076842351 container create be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 00:24:20 np0005531754 systemd[1]: Started libpod-conmon-be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76.scope.
Nov 22 00:24:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:20 np0005531754 podman[76688]: 2025-11-22 05:24:20.60242106 +0000 UTC m=+0.048182911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:20 np0005531754 podman[76688]: 2025-11-22 05:24:20.713548989 +0000 UTC m=+0.159310780 container init be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:20 np0005531754 podman[76688]: 2025-11-22 05:24:20.723565315 +0000 UTC m=+0.169327106 container start be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:24:20 np0005531754 podman[76688]: 2025-11-22 05:24:20.7279042 +0000 UTC m=+0.173666031 container attach be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:24:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 00:24:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870129946' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]: 
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]: {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "health": {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "status": "HEALTH_OK",
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "checks": {},
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "mutes": []
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    },
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "election_epoch": 5,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "quorum": [
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        0
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    ],
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "quorum_names": [
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "compute-0"
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    ],
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "quorum_age": 24,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "monmap": {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "epoch": 1,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "min_mon_release_name": "reef",
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_mons": 1
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    },
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "osdmap": {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "epoch": 1,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_osds": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_up_osds": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "osd_up_since": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_in_osds": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "osd_in_since": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_remapped_pgs": 0
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    },
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "pgmap": {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "pgs_by_state": [],
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_pgs": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_pools": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_objects": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "data_bytes": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "bytes_used": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "bytes_avail": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "bytes_total": 0
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    },
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "fsmap": {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "epoch": 1,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "by_rank": [],
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "up:standby": 0
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    },
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "mgrmap": {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "available": true,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "num_standbys": 0,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "modules": [
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:            "iostat",
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:            "nfs",
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:            "restful"
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        ],
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "services": {}
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    },
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "servicemap": {
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "epoch": 1,
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:        "services": {}
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    },
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]:    "progress_events": {}
Nov 22 00:24:21 np0005531754 keen_brahmagupta[76704]: }
Nov 22 00:24:21 np0005531754 systemd[1]: libpod-be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76.scope: Deactivated successfully.
Nov 22 00:24:21 np0005531754 podman[76688]: 2025-11-22 05:24:21.367183119 +0000 UTC m=+0.812944940 container died be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:24:21 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800-merged.mount: Deactivated successfully.
Nov 22 00:24:21 np0005531754 podman[76688]: 2025-11-22 05:24:21.427222572 +0000 UTC m=+0.872984333 container remove be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:21 np0005531754 systemd[1]: libpod-conmon-be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76.scope: Deactivated successfully.
Nov 22 00:24:21 np0005531754 podman[76744]: 2025-11-22 05:24:21.521681639 +0000 UTC m=+0.063169457 container create 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:24:21 np0005531754 systemd[1]: Started libpod-conmon-5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d.scope.
Nov 22 00:24:21 np0005531754 podman[76744]: 2025-11-22 05:24:21.494654502 +0000 UTC m=+0.036142370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:21 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:21 np0005531754 podman[76744]: 2025-11-22 05:24:21.642804765 +0000 UTC m=+0.184292633 container init 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:21 np0005531754 podman[76744]: 2025-11-22 05:24:21.648555407 +0000 UTC m=+0.190043185 container start 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:21 np0005531754 podman[76744]: 2025-11-22 05:24:21.76166826 +0000 UTC m=+0.303156038 container attach 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 00:24:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3208678017' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 00:24:22 np0005531754 systemd[1]: libpod-5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d.scope: Deactivated successfully.
Nov 22 00:24:22 np0005531754 podman[76744]: 2025-11-22 05:24:22.176641304 +0000 UTC m=+0.718129112 container died 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:24:22 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:22 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3208678017' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 00:24:22 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2-merged.mount: Deactivated successfully.
Nov 22 00:24:23 np0005531754 podman[76744]: 2025-11-22 05:24:23.115746461 +0000 UTC m=+1.657234249 container remove 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:24:23 np0005531754 systemd[1]: libpod-conmon-5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d.scope: Deactivated successfully.
Nov 22 00:24:23 np0005531754 podman[76801]: 2025-11-22 05:24:23.27172007 +0000 UTC m=+0.125124331 container create 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:23 np0005531754 podman[76801]: 2025-11-22 05:24:23.181972849 +0000 UTC m=+0.035377140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:23 np0005531754 systemd[1]: Started libpod-conmon-226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98.scope.
Nov 22 00:24:23 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:23 np0005531754 podman[76801]: 2025-11-22 05:24:23.514616158 +0000 UTC m=+0.368020429 container init 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:23 np0005531754 podman[76801]: 2025-11-22 05:24:23.568418666 +0000 UTC m=+0.421822907 container start 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:24:23 np0005531754 podman[76801]: 2025-11-22 05:24:23.628950862 +0000 UTC m=+0.482355203 container attach 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 22 00:24:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 00:24:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 22 00:24:24 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mscchl(active, since 5s)
Nov 22 00:24:24 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 00:24:24 np0005531754 systemd[1]: libpod-226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98.scope: Deactivated successfully.
Nov 22 00:24:24 np0005531754 podman[76801]: 2025-11-22 05:24:24.222071446 +0000 UTC m=+1.075475727 container died 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 00:24:24 np0005531754 systemd[1]: var-lib-containers-storage-overlay-66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d-merged.mount: Deactivated successfully.
Nov 22 00:24:24 np0005531754 podman[76801]: 2025-11-22 05:24:24.277074206 +0000 UTC m=+1.130478467 container remove 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:24 np0005531754 systemd[1]: libpod-conmon-226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98.scope: Deactivated successfully.
Nov 22 00:24:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: ignoring --setuser ceph since I am not root
Nov 22 00:24:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: ignoring --setgroup ceph since I am not root
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Nov 22 00:24:24 np0005531754 podman[76856]: 2025-11-22 05:24:24.330322959 +0000 UTC m=+0.037102845 container create 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:24 np0005531754 systemd[1]: Started libpod-conmon-4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4.scope.
Nov 22 00:24:24 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:24 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:24 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:24 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:24 np0005531754 podman[76856]: 2025-11-22 05:24:24.314684214 +0000 UTC m=+0.021464120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Nov 22 00:24:24 np0005531754 podman[76856]: 2025-11-22 05:24:24.43280454 +0000 UTC m=+0.139584436 container init 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:24:24 np0005531754 podman[76856]: 2025-11-22 05:24:24.436960949 +0000 UTC m=+0.143740835 container start 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:24 np0005531754 podman[76856]: 2025-11-22 05:24:24.440213796 +0000 UTC m=+0.146993682 container attach 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:24.723+0000 7f53cd216140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Nov 22 00:24:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:24.974+0000 7f53cd216140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 00:24:24 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Nov 22 00:24:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 00:24:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651815984' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 00:24:24 np0005531754 elastic_thompson[76897]: {
Nov 22 00:24:24 np0005531754 elastic_thompson[76897]:    "epoch": 5,
Nov 22 00:24:24 np0005531754 elastic_thompson[76897]:    "available": true,
Nov 22 00:24:24 np0005531754 elastic_thompson[76897]:    "active_name": "compute-0.mscchl",
Nov 22 00:24:24 np0005531754 elastic_thompson[76897]:    "num_standby": 0
Nov 22 00:24:24 np0005531754 elastic_thompson[76897]: }
Nov 22 00:24:25 np0005531754 systemd[1]: libpod-4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4.scope: Deactivated successfully.
Nov 22 00:24:25 np0005531754 podman[76856]: 2025-11-22 05:24:25.001356291 +0000 UTC m=+0.708136177 container died 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:24:25 np0005531754 systemd[1]: var-lib-containers-storage-overlay-df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc-merged.mount: Deactivated successfully.
Nov 22 00:24:25 np0005531754 podman[76856]: 2025-11-22 05:24:25.039074999 +0000 UTC m=+0.745854885 container remove 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:25 np0005531754 systemd[1]: libpod-conmon-4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4.scope: Deactivated successfully.
Nov 22 00:24:25 np0005531754 podman[76934]: 2025-11-22 05:24:25.135579496 +0000 UTC m=+0.068011253 container create a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:25 np0005531754 systemd[1]: Started libpod-conmon-a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48.scope.
Nov 22 00:24:25 np0005531754 podman[76934]: 2025-11-22 05:24:25.106738022 +0000 UTC m=+0.039169849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:25 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:25 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 00:24:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:25 np0005531754 podman[76934]: 2025-11-22 05:24:25.24028496 +0000 UTC m=+0.172716747 container init a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:24:25 np0005531754 podman[76934]: 2025-11-22 05:24:25.249505594 +0000 UTC m=+0.181937381 container start a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 00:24:25 np0005531754 podman[76934]: 2025-11-22 05:24:25.25407811 +0000 UTC m=+0.186509907 container attach a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:26 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Nov 22 00:24:27 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:27.178+0000 7f53cd216140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 00:24:27 np0005531754 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 00:24:27 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Nov 22 00:24:28 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Nov 22 00:24:28 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:28.865+0000 7f53cd216140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 00:24:28 np0005531754 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 00:24:28 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 00:24:29 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 00:24:29 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 00:24:29 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]:  from numpy import show_config as show_numpy_config
Nov 22 00:24:29 np0005531754 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 00:24:29 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:29.385+0000 7f53cd216140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 00:24:29 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Nov 22 00:24:29 np0005531754 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 00:24:29 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:29.628+0000 7f53cd216140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 00:24:29 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Nov 22 00:24:29 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Nov 22 00:24:30 np0005531754 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 00:24:30 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:30.095+0000 7f53cd216140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 00:24:30 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Nov 22 00:24:31 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Nov 22 00:24:32 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 00:24:32 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Nov 22 00:24:32 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Nov 22 00:24:33 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:33.636+0000 7f53cd216140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 00:24:33 np0005531754 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 00:24:33 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Nov 22 00:24:34 np0005531754 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:34 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:34.300+0000 7f53cd216140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:34 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 00:24:34 np0005531754 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 00:24:34 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:34.571+0000 7f53cd216140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 00:24:34 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Nov 22 00:24:34 np0005531754 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 00:24:34 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:34.794+0000 7f53cd216140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 00:24:34 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 00:24:35 np0005531754 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 00:24:35 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Nov 22 00:24:35 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:35.066+0000 7f53cd216140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 00:24:35 np0005531754 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 00:24:35 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Nov 22 00:24:35 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:35.311+0000 7f53cd216140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 00:24:36 np0005531754 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 00:24:36 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:36.312+0000 7f53cd216140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 00:24:36 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Nov 22 00:24:36 np0005531754 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 00:24:36 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Nov 22 00:24:36 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:36.617+0000 7f53cd216140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 00:24:37 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Nov 22 00:24:38 np0005531754 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 00:24:38 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Nov 22 00:24:38 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:38.047+0000 7f53cd216140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 00:24:40 np0005531754 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 00:24:40 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:40.091+0000 7f53cd216140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 00:24:40 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Nov 22 00:24:40 np0005531754 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 00:24:40 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Nov 22 00:24:40 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:40.338+0000 7f53cd216140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 00:24:40 np0005531754 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 00:24:40 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:40.597+0000 7f53cd216140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 00:24:40 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Nov 22 00:24:40 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Nov 22 00:24:41 np0005531754 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 00:24:41 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:41.128+0000 7f53cd216140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 00:24:41 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Nov 22 00:24:41 np0005531754 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 00:24:41 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:41.370+0000 7f53cd216140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 00:24:41 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Nov 22 00:24:41 np0005531754 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 00:24:41 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 00:24:41 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:41.992+0000 7f53cd216140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 00:24:42 np0005531754 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:42 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:42.672+0000 7f53cd216140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 00:24:42 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 00:24:43 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:43.388+0000 7f53cd216140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 00:24:43 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:43.623+0000 7f53cd216140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mscchl restarted
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mscchl
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x5579d3e051e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mscchl(active, starting, since 0.0117316s)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Starting
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Manager daemon compute-0.mscchl is now available
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:24:43
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] No pools available
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: Active manager daemon compute-0.mscchl restarted
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: Activating manager daemon compute-0.mscchl
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: Manager daemon compute-0.mscchl is now available
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: cephadm
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: crash
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] Starting
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: progress
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [progress INFO root] Loading...
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [progress INFO root] No stored events to load
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [progress INFO root] Loaded [] historic events
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: restful
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: status
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"} v 0) v1
Nov 22 00:24:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Nov 22 00:24:43 np0005531754 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mscchl(active, since 1.02397s)
Nov 22 00:24:44 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 22 00:24:44 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 22 00:24:44 np0005531754 brave_carson[76950]: {
Nov 22 00:24:44 np0005531754 brave_carson[76950]:    "mgrmap_epoch": 7,
Nov 22 00:24:44 np0005531754 brave_carson[76950]:    "initialized": true
Nov 22 00:24:44 np0005531754 brave_carson[76950]: }
Nov 22 00:24:44 np0005531754 systemd[1]: libpod-a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48.scope: Deactivated successfully.
Nov 22 00:24:44 np0005531754 podman[76934]: 2025-11-22 05:24:44.681027671 +0000 UTC m=+19.613459428 container died a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: Found migration_current of "None". Setting to last migration.
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 22 00:24:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:44 np0005531754 systemd[1]: var-lib-containers-storage-overlay-15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f-merged.mount: Deactivated successfully.
Nov 22 00:24:44 np0005531754 podman[76934]: 2025-11-22 05:24:44.86837776 +0000 UTC m=+19.800809547 container remove a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:44 np0005531754 systemd[1]: libpod-conmon-a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48.scope: Deactivated successfully.
Nov 22 00:24:44 np0005531754 podman[77114]: 2025-11-22 05:24:44.95990445 +0000 UTC m=+0.063481198 container create 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 22 00:24:45 np0005531754 systemd[1]: Started libpod-conmon-0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7.scope.
Nov 22 00:24:45 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:45 np0005531754 podman[77114]: 2025-11-22 05:24:44.934586943 +0000 UTC m=+0.038163711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:45 np0005531754 podman[77114]: 2025-11-22 05:24:45.067572345 +0000 UTC m=+0.171149133 container init 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:24:45 np0005531754 podman[77114]: 2025-11-22 05:24:45.078110516 +0000 UTC m=+0.181687284 container start 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:45 np0005531754 podman[77114]: 2025-11-22 05:24:45.08262758 +0000 UTC m=+0.186204348 container attach 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Bus STARTING
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Bus STARTING
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 00:24:45 np0005531754 systemd[1]: libpod-0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7.scope: Deactivated successfully.
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Serving on https://192.168.122.100:7150
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Serving on https://192.168.122.100:7150
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Client ('192.168.122.100', 43068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Client ('192.168.122.100', 43068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 00:24:45 np0005531754 podman[77169]: 2025-11-22 05:24:45.789142026 +0000 UTC m=+0.024652870 container died 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:45 np0005531754 systemd[1]: var-lib-containers-storage-overlay-29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608-merged.mount: Deactivated successfully.
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:45 np0005531754 podman[77169]: 2025-11-22 05:24:45.839926525 +0000 UTC m=+0.075437289 container remove 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mscchl(active, since 2s)
Nov 22 00:24:45 np0005531754 systemd[1]: libpod-conmon-0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7.scope: Deactivated successfully.
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Serving on http://192.168.122.100:8765
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Serving on http://192.168.122.100:8765
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Bus STARTED
Nov 22 00:24:45 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Bus STARTED
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 00:24:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 00:24:45 np0005531754 podman[77195]: 2025-11-22 05:24:45.931826575 +0000 UTC m=+0.058437699 container create b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:24:45 np0005531754 systemd[1]: Started libpod-conmon-b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd.scope.
Nov 22 00:24:45 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:46 np0005531754 podman[77195]: 2025-11-22 05:24:45.911379713 +0000 UTC m=+0.037990887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:46 np0005531754 podman[77195]: 2025-11-22 05:24:46.007197981 +0000 UTC m=+0.133809125 container init b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 22 00:24:46 np0005531754 podman[77195]: 2025-11-22 05:24:46.013377652 +0000 UTC m=+0.139988786 container start b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:24:46 np0005531754 podman[77195]: 2025-11-22 05:24:46.017303679 +0000 UTC m=+0.143914853 container attach b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:46 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_user
Nov 22 00:24:46 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:46 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_config
Nov 22 00:24:46 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 22 00:24:46 np0005531754 ceph-mgr[76134]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 22 00:24:46 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 22 00:24:46 np0005531754 zealous_brahmagupta[77212]: ssh user set to ceph-admin. sudo will be used
Nov 22 00:24:46 np0005531754 systemd[1]: libpod-b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd.scope: Deactivated successfully.
Nov 22 00:24:46 np0005531754 podman[77195]: 2025-11-22 05:24:46.555444699 +0000 UTC m=+0.682055913 container died b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 00:24:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404-merged.mount: Deactivated successfully.
Nov 22 00:24:46 np0005531754 podman[77195]: 2025-11-22 05:24:46.608726556 +0000 UTC m=+0.735337720 container remove b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:24:46 np0005531754 systemd[1]: libpod-conmon-b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd.scope: Deactivated successfully.
Nov 22 00:24:46 np0005531754 podman[77250]: 2025-11-22 05:24:46.681011326 +0000 UTC m=+0.047449287 container create 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:24:46 np0005531754 systemd[1]: Started libpod-conmon-247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe.scope.
Nov 22 00:24:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:46 np0005531754 podman[77250]: 2025-11-22 05:24:46.662410574 +0000 UTC m=+0.028848525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:46 np0005531754 podman[77250]: 2025-11-22 05:24:46.765531614 +0000 UTC m=+0.131969615 container init 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:24:46 np0005531754 podman[77250]: 2025-11-22 05:24:46.778552073 +0000 UTC m=+0.144990044 container start 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:46 np0005531754 podman[77250]: 2025-11-22 05:24:46.783663994 +0000 UTC m=+0.150102015 container attach 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Bus STARTING
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Serving on https://192.168.122.100:7150
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Client ('192.168.122.100', 43068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Serving on http://192.168.122.100:8765
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Bus STARTED
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923970 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:24:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 22 00:24:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:47 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 22 00:24:47 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 22 00:24:47 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Set ssh private key
Nov 22 00:24:47 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 22 00:24:47 np0005531754 systemd[1]: libpod-247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe.scope: Deactivated successfully.
Nov 22 00:24:47 np0005531754 podman[77250]: 2025-11-22 05:24:47.362865593 +0000 UTC m=+0.729303524 container died 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:47 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc-merged.mount: Deactivated successfully.
Nov 22 00:24:47 np0005531754 podman[77250]: 2025-11-22 05:24:47.397377014 +0000 UTC m=+0.763814945 container remove 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:47 np0005531754 systemd[1]: libpod-conmon-247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe.scope: Deactivated successfully.
Nov 22 00:24:47 np0005531754 podman[77304]: 2025-11-22 05:24:47.49564955 +0000 UTC m=+0.071110430 container create 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:47 np0005531754 systemd[1]: Started libpod-conmon-4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057.scope.
Nov 22 00:24:47 np0005531754 podman[77304]: 2025-11-22 05:24:47.463836114 +0000 UTC m=+0.039297024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:47 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:47 np0005531754 podman[77304]: 2025-11-22 05:24:47.60786606 +0000 UTC m=+0.183326890 container init 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:24:47 np0005531754 podman[77304]: 2025-11-22 05:24:47.616758336 +0000 UTC m=+0.192219176 container start 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:24:47 np0005531754 podman[77304]: 2025-11-22 05:24:47.633608689 +0000 UTC m=+0.209069739 container attach 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 00:24:47 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:47 np0005531754 ceph-mon[75840]: Set ssh ssh_user
Nov 22 00:24:47 np0005531754 ceph-mon[75840]: Set ssh ssh_config
Nov 22 00:24:47 np0005531754 ceph-mon[75840]: ssh user set to ceph-admin. sudo will be used
Nov 22 00:24:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 22 00:24:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:48 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 22 00:24:48 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 22 00:24:48 np0005531754 systemd[1]: libpod-4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057.scope: Deactivated successfully.
Nov 22 00:24:48 np0005531754 podman[77304]: 2025-11-22 05:24:48.135068119 +0000 UTC m=+0.710528959 container died 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0-merged.mount: Deactivated successfully.
Nov 22 00:24:48 np0005531754 podman[77304]: 2025-11-22 05:24:48.17796945 +0000 UTC m=+0.753430290 container remove 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:24:48 np0005531754 systemd[1]: libpod-conmon-4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057.scope: Deactivated successfully.
Nov 22 00:24:48 np0005531754 podman[77357]: 2025-11-22 05:24:48.241298094 +0000 UTC m=+0.046569733 container create 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:48 np0005531754 systemd[1]: Started libpod-conmon-7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04.scope.
Nov 22 00:24:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:48 np0005531754 podman[77357]: 2025-11-22 05:24:48.217766506 +0000 UTC m=+0.023038175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:48 np0005531754 podman[77357]: 2025-11-22 05:24:48.31304801 +0000 UTC m=+0.118319699 container init 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:48 np0005531754 podman[77357]: 2025-11-22 05:24:48.321842532 +0000 UTC m=+0.127114201 container start 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:24:48 np0005531754 podman[77357]: 2025-11-22 05:24:48.326569052 +0000 UTC m=+0.131840721 container attach 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:48 np0005531754 friendly_boyd[77373]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIPiZ7ibrX9u0jc8n2TadxqUhEaLFpm5hoxNk7E8sPzVr+7md04KUsVyLl7YefTfTCAtLesLv0Rgu5rzJ2QOUo0OMuFaPi6qKRqxC/WvpAyqe3xQYxOslHfgzEHMI8+kcs7/1ziCQ9EVoMBSqRsuBeOyMVLTs/yzR6xTv8E9xwbovlADFyvmgzXwA3Z+oeMxT0iudT9c50Hi6PeQBfJypCJyMsh2/Rzc3GKzKNVgV8DKirHuSqrZHTGzcdFgwgw2UEEt6KVNxLzPPsOWLuCiq78FKHFgVLSnFMGltzRbFNcegXdk6LQUSX5PETF+owCAWMWDgUaDWhwPTo7FmmMvW7GYSi3TI+jYuuWpy918L1Wh9Uyc67WsyCoELg2CIejA92oIWdIl5DlBmtbcaM0aBpJRFVxUBYE6R9envdGQOg+u+t8QrJb6MS6ebG+tH6CbFn8Snf6CXXokl7Q/PJuZWCbe1RP2PisYlql3o9zPU1hIA73eC66p13WiW9z3YCiX8= zuul@controller
Nov 22 00:24:48 np0005531754 systemd[1]: libpod-7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04.scope: Deactivated successfully.
Nov 22 00:24:48 np0005531754 podman[77357]: 2025-11-22 05:24:48.898234975 +0000 UTC m=+0.703506614 container died 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:49 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027-merged.mount: Deactivated successfully.
Nov 22 00:24:49 np0005531754 podman[77357]: 2025-11-22 05:24:49.107865838 +0000 UTC m=+0.913137507 container remove 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:49 np0005531754 systemd[1]: libpod-conmon-7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04.scope: Deactivated successfully.
Nov 22 00:24:49 np0005531754 ceph-mon[75840]: Set ssh ssh_identity_key
Nov 22 00:24:49 np0005531754 ceph-mon[75840]: Set ssh private key
Nov 22 00:24:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:49 np0005531754 ceph-mon[75840]: Set ssh ssh_identity_pub
Nov 22 00:24:49 np0005531754 podman[77409]: 2025-11-22 05:24:49.178147433 +0000 UTC m=+0.050359538 container create 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:49 np0005531754 systemd[1]: Started libpod-conmon-1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa.scope.
Nov 22 00:24:49 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:49 np0005531754 podman[77409]: 2025-11-22 05:24:49.152555889 +0000 UTC m=+0.024768084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:49 np0005531754 podman[77409]: 2025-11-22 05:24:49.253706964 +0000 UTC m=+0.125919089 container init 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:24:49 np0005531754 podman[77409]: 2025-11-22 05:24:49.260754858 +0000 UTC m=+0.132966983 container start 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:49 np0005531754 podman[77409]: 2025-11-22 05:24:49.263951436 +0000 UTC m=+0.136163561 container attach 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:24:49 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:49 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:50 np0005531754 systemd-logind[798]: New session 20 of user ceph-admin.
Nov 22 00:24:50 np0005531754 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 00:24:50 np0005531754 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 00:24:50 np0005531754 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 00:24:50 np0005531754 systemd[1]: Starting User Manager for UID 42477...
Nov 22 00:24:50 np0005531754 systemd[77455]: Queued start job for default target Main User Target.
Nov 22 00:24:50 np0005531754 systemd[77455]: Created slice User Application Slice.
Nov 22 00:24:50 np0005531754 systemd[77455]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 00:24:50 np0005531754 systemd[77455]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 00:24:50 np0005531754 systemd[77455]: Reached target Paths.
Nov 22 00:24:50 np0005531754 systemd[77455]: Reached target Timers.
Nov 22 00:24:50 np0005531754 systemd[77455]: Starting D-Bus User Message Bus Socket...
Nov 22 00:24:50 np0005531754 systemd[77455]: Starting Create User's Volatile Files and Directories...
Nov 22 00:24:50 np0005531754 systemd[77455]: Finished Create User's Volatile Files and Directories.
Nov 22 00:24:50 np0005531754 systemd[77455]: Listening on D-Bus User Message Bus Socket.
Nov 22 00:24:50 np0005531754 systemd[77455]: Reached target Sockets.
Nov 22 00:24:50 np0005531754 systemd[77455]: Reached target Basic System.
Nov 22 00:24:50 np0005531754 systemd[77455]: Reached target Main User Target.
Nov 22 00:24:50 np0005531754 systemd[77455]: Startup finished in 161ms.
Nov 22 00:24:50 np0005531754 systemd-logind[798]: New session 22 of user ceph-admin.
Nov 22 00:24:50 np0005531754 systemd[1]: Started User Manager for UID 42477.
Nov 22 00:24:50 np0005531754 systemd[1]: Started Session 20 of User ceph-admin.
Nov 22 00:24:50 np0005531754 systemd[1]: Started Session 22 of User ceph-admin.
Nov 22 00:24:50 np0005531754 systemd-logind[798]: New session 23 of user ceph-admin.
Nov 22 00:24:50 np0005531754 systemd[1]: Started Session 23 of User ceph-admin.
Nov 22 00:24:51 np0005531754 systemd-logind[798]: New session 24 of user ceph-admin.
Nov 22 00:24:51 np0005531754 systemd[1]: Started Session 24 of User ceph-admin.
Nov 22 00:24:51 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 22 00:24:51 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 22 00:24:51 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:51 np0005531754 systemd-logind[798]: New session 25 of user ceph-admin.
Nov 22 00:24:51 np0005531754 systemd[1]: Started Session 25 of User ceph-admin.
Nov 22 00:24:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053068 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:24:52 np0005531754 systemd-logind[798]: New session 26 of user ceph-admin.
Nov 22 00:24:52 np0005531754 systemd[1]: Started Session 26 of User ceph-admin.
Nov 22 00:24:52 np0005531754 ceph-mon[75840]: Deploying cephadm binary to compute-0
Nov 22 00:24:52 np0005531754 systemd-logind[798]: New session 27 of user ceph-admin.
Nov 22 00:24:52 np0005531754 systemd[1]: Started Session 27 of User ceph-admin.
Nov 22 00:24:53 np0005531754 systemd-logind[798]: New session 28 of user ceph-admin.
Nov 22 00:24:53 np0005531754 systemd[1]: Started Session 28 of User ceph-admin.
Nov 22 00:24:53 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:53 np0005531754 systemd-logind[798]: New session 29 of user ceph-admin.
Nov 22 00:24:53 np0005531754 systemd[1]: Started Session 29 of User ceph-admin.
Nov 22 00:24:54 np0005531754 systemd-logind[798]: New session 30 of user ceph-admin.
Nov 22 00:24:54 np0005531754 systemd[1]: Started Session 30 of User ceph-admin.
Nov 22 00:24:54 np0005531754 systemd-logind[798]: New session 31 of user ceph-admin.
Nov 22 00:24:54 np0005531754 systemd[1]: Started Session 31 of User ceph-admin.
Nov 22 00:24:55 np0005531754 systemd-logind[798]: New session 32 of user ceph-admin.
Nov 22 00:24:55 np0005531754 systemd[1]: Started Session 32 of User ceph-admin.
Nov 22 00:24:55 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 00:24:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:55 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Added host compute-0
Nov 22 00:24:55 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 00:24:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 00:24:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 00:24:55 np0005531754 strange_boyd[77425]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 00:24:55 np0005531754 systemd[1]: libpod-1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa.scope: Deactivated successfully.
Nov 22 00:24:55 np0005531754 podman[78074]: 2025-11-22 05:24:55.867142695 +0000 UTC m=+0.047059946 container died 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:24:55 np0005531754 systemd[1]: var-lib-containers-storage-overlay-caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8-merged.mount: Deactivated successfully.
Nov 22 00:24:55 np0005531754 podman[78074]: 2025-11-22 05:24:55.911021944 +0000 UTC m=+0.090939085 container remove 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:24:55 np0005531754 systemd[1]: libpod-conmon-1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa.scope: Deactivated successfully.
Nov 22 00:24:56 np0005531754 podman[78127]: 2025-11-22 05:24:56.003121409 +0000 UTC m=+0.059147939 container create 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:24:56 np0005531754 systemd[1]: Started libpod-conmon-792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6.scope.
Nov 22 00:24:56 np0005531754 podman[78127]: 2025-11-22 05:24:55.970685736 +0000 UTC m=+0.026712296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:56 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:56 np0005531754 podman[78127]: 2025-11-22 05:24:56.096024158 +0000 UTC m=+0.152050708 container init 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:56 np0005531754 podman[78127]: 2025-11-22 05:24:56.10374247 +0000 UTC m=+0.159768990 container start 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:56 np0005531754 podman[78127]: 2025-11-22 05:24:56.1069663 +0000 UTC m=+0.162992860 container attach 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:24:56 np0005531754 podman[78224]: 2025-11-22 05:24:56.383198067 +0000 UTC m=+0.062228446 container create 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:24:56 np0005531754 systemd[1]: Started libpod-conmon-4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853.scope.
Nov 22 00:24:56 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:56 np0005531754 podman[78224]: 2025-11-22 05:24:56.354468995 +0000 UTC m=+0.033499454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:56 np0005531754 podman[78224]: 2025-11-22 05:24:56.457694128 +0000 UTC m=+0.136724487 container init 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:24:56 np0005531754 podman[78224]: 2025-11-22 05:24:56.46358525 +0000 UTC m=+0.142615619 container start 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:56 np0005531754 podman[78224]: 2025-11-22 05:24:56.466948613 +0000 UTC m=+0.145978972 container attach 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:24:56 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:56 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 22 00:24:56 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 22 00:24:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 00:24:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:56 np0005531754 gracious_keller[78189]: Scheduled mon update...
Nov 22 00:24:56 np0005531754 systemd[1]: libpod-792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6.scope: Deactivated successfully.
Nov 22 00:24:56 np0005531754 podman[78127]: 2025-11-22 05:24:56.687046184 +0000 UTC m=+0.743072704 container died 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:24:56 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b-merged.mount: Deactivated successfully.
Nov 22 00:24:56 np0005531754 podman[78127]: 2025-11-22 05:24:56.721607316 +0000 UTC m=+0.777633836 container remove 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:24:56 np0005531754 systemd[1]: libpod-conmon-792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6.scope: Deactivated successfully.
Nov 22 00:24:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:56 np0005531754 ceph-mon[75840]: Added host compute-0
Nov 22 00:24:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:56 np0005531754 interesting_ishizaka[78250]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 00:24:56 np0005531754 podman[78277]: 2025-11-22 05:24:56.780548419 +0000 UTC m=+0.042419570 container create 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:24:56 np0005531754 systemd[1]: libpod-4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853.scope: Deactivated successfully.
Nov 22 00:24:56 np0005531754 podman[78224]: 2025-11-22 05:24:56.78202378 +0000 UTC m=+0.461054179 container died 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:56 np0005531754 systemd[1]: Started libpod-conmon-5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d.scope.
Nov 22 00:24:56 np0005531754 podman[78224]: 2025-11-22 05:24:56.827708427 +0000 UTC m=+0.506738786 container remove 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:24:56 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:56 np0005531754 systemd[1]: libpod-conmon-4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853.scope: Deactivated successfully.
Nov 22 00:24:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 22 00:24:56 np0005531754 podman[78277]: 2025-11-22 05:24:56.760093435 +0000 UTC m=+0.021964606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:56 np0005531754 podman[78277]: 2025-11-22 05:24:56.864773718 +0000 UTC m=+0.126644879 container init 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:24:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:56 np0005531754 podman[78277]: 2025-11-22 05:24:56.871283357 +0000 UTC m=+0.133154498 container start 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:24:56 np0005531754 podman[78277]: 2025-11-22 05:24:56.874021492 +0000 UTC m=+0.135892723 container attach 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:24:56 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6f10cb3c335c3a39fd1a5b5caf187f93150b270fdc2df42765f069528cda3807-merged.mount: Deactivated successfully.
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:24:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:57 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 22 00:24:57 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:57 np0005531754 inspiring_greider[78305]: Scheduled mgr update...
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:57 np0005531754 systemd[1]: libpod-5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d.scope: Deactivated successfully.
Nov 22 00:24:57 np0005531754 podman[78277]: 2025-11-22 05:24:57.418122436 +0000 UTC m=+0.679993577 container died 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:24:57 np0005531754 systemd[1]: var-lib-containers-storage-overlay-fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf-merged.mount: Deactivated successfully.
Nov 22 00:24:57 np0005531754 podman[78277]: 2025-11-22 05:24:57.4636284 +0000 UTC m=+0.725499551 container remove 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 00:24:57 np0005531754 systemd[1]: libpod-conmon-5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d.scope: Deactivated successfully.
Nov 22 00:24:57 np0005531754 podman[78487]: 2025-11-22 05:24:57.54825088 +0000 UTC m=+0.056412164 container create cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:24:57 np0005531754 systemd[1]: Started libpod-conmon-cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b.scope.
Nov 22 00:24:57 np0005531754 podman[78487]: 2025-11-22 05:24:57.528818344 +0000 UTC m=+0.036979678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:57 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:57 np0005531754 podman[78487]: 2025-11-22 05:24:57.648247153 +0000 UTC m=+0.156408517 container init cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:24:57 np0005531754 podman[78487]: 2025-11-22 05:24:57.661906939 +0000 UTC m=+0.170068263 container start cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:57 np0005531754 podman[78487]: 2025-11-22 05:24:57.667017761 +0000 UTC m=+0.175179125 container attach cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: Saving service mon spec with placement count:5
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:58 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:58 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service crash spec with placement *
Nov 22 00:24:58 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 22 00:24:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 00:24:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:58 np0005531754 charming_driscoll[78536]: Scheduled crash update...
Nov 22 00:24:58 np0005531754 systemd[1]: libpod-cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b.scope: Deactivated successfully.
Nov 22 00:24:58 np0005531754 podman[78487]: 2025-11-22 05:24:58.25380877 +0000 UTC m=+0.761970064 container died cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:24:58 np0005531754 podman[78673]: 2025-11-22 05:24:58.27052765 +0000 UTC m=+0.060471026 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:58 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91-merged.mount: Deactivated successfully.
Nov 22 00:24:58 np0005531754 podman[78487]: 2025-11-22 05:24:58.316424694 +0000 UTC m=+0.824585988 container remove cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:24:58 np0005531754 systemd[1]: libpod-conmon-cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b.scope: Deactivated successfully.
Nov 22 00:24:58 np0005531754 podman[78705]: 2025-11-22 05:24:58.379433829 +0000 UTC m=+0.042419260 container create 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:24:58 np0005531754 systemd[1]: Started libpod-conmon-811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a.scope.
Nov 22 00:24:58 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:58 np0005531754 podman[78705]: 2025-11-22 05:24:58.358717099 +0000 UTC m=+0.021702540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:58 np0005531754 podman[78705]: 2025-11-22 05:24:58.461204561 +0000 UTC m=+0.124190002 container init 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 00:24:58 np0005531754 podman[78705]: 2025-11-22 05:24:58.472866512 +0000 UTC m=+0.135851923 container start 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:58 np0005531754 podman[78705]: 2025-11-22 05:24:58.475972128 +0000 UTC m=+0.138957539 container attach 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:24:58 np0005531754 podman[78673]: 2025-11-22 05:24:58.557354389 +0000 UTC m=+0.347297755 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 00:24:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:24:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3448207332' entity='client.admin' 
Nov 22 00:24:59 np0005531754 systemd[1]: libpod-811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a.scope: Deactivated successfully.
Nov 22 00:24:59 np0005531754 podman[78705]: 2025-11-22 05:24:59.091260751 +0000 UTC m=+0.754246202 container died 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:24:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13-merged.mount: Deactivated successfully.
Nov 22 00:24:59 np0005531754 podman[78705]: 2025-11-22 05:24:59.135551531 +0000 UTC m=+0.798536942 container remove 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:24:59 np0005531754 systemd[1]: libpod-conmon-811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a.scope: Deactivated successfully.
Nov 22 00:24:59 np0005531754 podman[78891]: 2025-11-22 05:24:59.231291127 +0000 UTC m=+0.058876092 container create 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: Saving service mgr spec with placement count:2
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: Saving service crash spec with placement *
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3448207332' entity='client.admin' 
Nov 22 00:24:59 np0005531754 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78916 (sysctl)
Nov 22 00:24:59 np0005531754 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 22 00:24:59 np0005531754 systemd[1]: Started libpod-conmon-6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082.scope.
Nov 22 00:24:59 np0005531754 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 22 00:24:59 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:24:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:24:59 np0005531754 podman[78891]: 2025-11-22 05:24:59.204414298 +0000 UTC m=+0.031999353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:24:59 np0005531754 podman[78891]: 2025-11-22 05:24:59.313398019 +0000 UTC m=+0.140983044 container init 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 00:24:59 np0005531754 podman[78891]: 2025-11-22 05:24:59.324452883 +0000 UTC m=+0.152037838 container start 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:24:59 np0005531754 podman[78891]: 2025-11-22 05:24:59.327874937 +0000 UTC m=+0.155459992 container attach 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:24:59 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:24:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 22 00:24:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:24:59 np0005531754 systemd[1]: libpod-6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082.scope: Deactivated successfully.
Nov 22 00:24:59 np0005531754 podman[79060]: 2025-11-22 05:24:59.950187174 +0000 UTC m=+0.032639749 container died 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:24:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209-merged.mount: Deactivated successfully.
Nov 22 00:25:00 np0005531754 podman[79060]: 2025-11-22 05:25:00.011198835 +0000 UTC m=+0.093651430 container remove 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:25:00 np0005531754 systemd[1]: libpod-conmon-6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082.scope: Deactivated successfully.
Nov 22 00:25:00 np0005531754 podman[79081]: 2025-11-22 05:25:00.087149616 +0000 UTC m=+0.045600436 container create d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 00:25:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:00 np0005531754 systemd[1]: Started libpod-conmon-d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1.scope.
Nov 22 00:25:00 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:00 np0005531754 podman[79081]: 2025-11-22 05:25:00.163606832 +0000 UTC m=+0.122057712 container init d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:25:00 np0005531754 podman[79081]: 2025-11-22 05:25:00.068936104 +0000 UTC m=+0.027386944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:00 np0005531754 podman[79081]: 2025-11-22 05:25:00.171295873 +0000 UTC m=+0.129746713 container start d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:00 np0005531754 podman[79081]: 2025-11-22 05:25:00.175932211 +0000 UTC m=+0.134383081 container attach d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:00 np0005531754 podman[79281]: 2025-11-22 05:25:00.713924116 +0000 UTC m=+0.048893227 container create 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 00:25:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:25:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 00:25:00 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:00 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Added label _admin to host compute-0
Nov 22 00:25:00 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 22 00:25:00 np0005531754 jolly_lovelace[79115]: Added label _admin to host compute-0
Nov 22 00:25:00 np0005531754 systemd[1]: Started libpod-conmon-265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177.scope.
Nov 22 00:25:00 np0005531754 systemd[1]: libpod-d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1.scope: Deactivated successfully.
Nov 22 00:25:00 np0005531754 podman[79081]: 2025-11-22 05:25:00.758305709 +0000 UTC m=+0.716756529 container died d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:25:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a-merged.mount: Deactivated successfully.
Nov 22 00:25:00 np0005531754 podman[79281]: 2025-11-22 05:25:00.693198305 +0000 UTC m=+0.028167436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:00 np0005531754 podman[79281]: 2025-11-22 05:25:00.789867027 +0000 UTC m=+0.124836178 container init 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:00 np0005531754 podman[79281]: 2025-11-22 05:25:00.797139228 +0000 UTC m=+0.132108349 container start 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 00:25:00 np0005531754 infallible_jang[79300]: 167 167
Nov 22 00:25:00 np0005531754 systemd[1]: libpod-265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177.scope: Deactivated successfully.
Nov 22 00:25:00 np0005531754 podman[79281]: 2025-11-22 05:25:00.807306008 +0000 UTC m=+0.142275159 container attach 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:00 np0005531754 podman[79081]: 2025-11-22 05:25:00.811992577 +0000 UTC m=+0.770443407 container remove d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 00:25:00 np0005531754 podman[79281]: 2025-11-22 05:25:00.815971567 +0000 UTC m=+0.150940688 container died 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:25:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-02fadf327700d05a91fe3eba7f82c79fc9be839fd7e722b8d9501a0d0904096b-merged.mount: Deactivated successfully.
Nov 22 00:25:00 np0005531754 systemd[1]: libpod-conmon-d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1.scope: Deactivated successfully.
Nov 22 00:25:00 np0005531754 podman[79281]: 2025-11-22 05:25:00.854544768 +0000 UTC m=+0.189513889 container remove 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:00 np0005531754 systemd[1]: libpod-conmon-265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177.scope: Deactivated successfully.
Nov 22 00:25:00 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:00 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:00 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:00 np0005531754 podman[79323]: 2025-11-22 05:25:00.888901325 +0000 UTC m=+0.053558886 container create a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 00:25:00 np0005531754 systemd[1]: Started libpod-conmon-a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d.scope.
Nov 22 00:25:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:00 np0005531754 podman[79323]: 2025-11-22 05:25:00.863407073 +0000 UTC m=+0.028064644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:00 np0005531754 podman[79323]: 2025-11-22 05:25:00.974082361 +0000 UTC m=+0.138739902 container init a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:00 np0005531754 podman[79323]: 2025-11-22 05:25:00.985087284 +0000 UTC m=+0.149744855 container start a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:00 np0005531754 podman[79323]: 2025-11-22 05:25:00.990577765 +0000 UTC m=+0.155235316 container attach a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 22 00:25:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 22 00:25:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1330127533' entity='client.admin' 
Nov 22 00:25:01 np0005531754 systemd[1]: libpod-a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d.scope: Deactivated successfully.
Nov 22 00:25:01 np0005531754 podman[79323]: 2025-11-22 05:25:01.553358482 +0000 UTC m=+0.718016043 container died a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:25:01 np0005531754 systemd[1]: var-lib-containers-storage-overlay-026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef-merged.mount: Deactivated successfully.
Nov 22 00:25:01 np0005531754 podman[79323]: 2025-11-22 05:25:01.604180473 +0000 UTC m=+0.768837994 container remove a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:01 np0005531754 systemd[1]: libpod-conmon-a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d.scope: Deactivated successfully.
Nov 22 00:25:01 np0005531754 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 00:25:01 np0005531754 podman[79382]: 2025-11-22 05:25:01.688758591 +0000 UTC m=+0.056453065 container create 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:01 np0005531754 systemd[1]: Started libpod-conmon-25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92.scope.
Nov 22 00:25:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:01 np0005531754 podman[79382]: 2025-11-22 05:25:01.66729169 +0000 UTC m=+0.034986214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:01 np0005531754 podman[79382]: 2025-11-22 05:25:01.78093897 +0000 UTC m=+0.148633454 container init 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:25:01 np0005531754 podman[79382]: 2025-11-22 05:25:01.787066008 +0000 UTC m=+0.154760482 container start 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:25:01 np0005531754 podman[79382]: 2025-11-22 05:25:01.790188015 +0000 UTC m=+0.157882539 container attach 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 00:25:01 np0005531754 ceph-mon[75840]: Added label _admin to host compute-0
Nov 22 00:25:01 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1330127533' entity='client.admin' 
Nov 22 00:25:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 22 00:25:02 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1299912635' entity='client.admin' 
Nov 22 00:25:02 np0005531754 great_spence[79399]: set mgr/dashboard/cluster/status
Nov 22 00:25:02 np0005531754 systemd[1]: libpod-25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92.scope: Deactivated successfully.
Nov 22 00:25:02 np0005531754 podman[79382]: 2025-11-22 05:25:02.444147694 +0000 UTC m=+0.811842168 container died 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:25:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081-merged.mount: Deactivated successfully.
Nov 22 00:25:02 np0005531754 podman[79382]: 2025-11-22 05:25:02.493668687 +0000 UTC m=+0.861363201 container remove 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:25:02 np0005531754 systemd[1]: libpod-conmon-25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92.scope: Deactivated successfully.
Nov 22 00:25:02 np0005531754 podman[79444]: 2025-11-22 05:25:02.717365878 +0000 UTC m=+0.055786778 container create 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:02 np0005531754 systemd[1]: Started libpod-conmon-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope.
Nov 22 00:25:02 np0005531754 podman[79444]: 2025-11-22 05:25:02.691350601 +0000 UTC m=+0.029771521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:02 np0005531754 podman[79444]: 2025-11-22 05:25:02.829362402 +0000 UTC m=+0.167783282 container init 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:25:02 np0005531754 podman[79444]: 2025-11-22 05:25:02.845689211 +0000 UTC m=+0.184110111 container start 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 00:25:02 np0005531754 podman[79444]: 2025-11-22 05:25:02.85037155 +0000 UTC m=+0.188792410 container attach 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:25:03 np0005531754 python3[79490]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:03 np0005531754 podman[79491]: 2025-11-22 05:25:03.292444194 +0000 UTC m=+0.071092709 container create e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:25:03 np0005531754 systemd[1]: Started libpod-conmon-e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5.scope.
Nov 22 00:25:03 np0005531754 podman[79491]: 2025-11-22 05:25:03.257965774 +0000 UTC m=+0.036614329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:03 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4029d05eda3c48c7f19913dd26ae9870c37196cc3eecd61c99bc58a6de79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4029d05eda3c48c7f19913dd26ae9870c37196cc3eecd61c99bc58a6de79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:03 np0005531754 podman[79491]: 2025-11-22 05:25:03.389796825 +0000 UTC m=+0.168445410 container init e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:03 np0005531754 podman[79491]: 2025-11-22 05:25:03.399876482 +0000 UTC m=+0.178524997 container start e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:03 np0005531754 podman[79491]: 2025-11-22 05:25:03.404677925 +0000 UTC m=+0.183326440 container attach e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 00:25:03 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1299912635' entity='client.admin' 
Nov 22 00:25:03 np0005531754 ceph-mgr[76134]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 22 00:25:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:03 np0005531754 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 22 00:25:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 22 00:25:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2682568180' entity='client.admin' 
Nov 22 00:25:03 np0005531754 systemd[1]: libpod-e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5.scope: Deactivated successfully.
Nov 22 00:25:03 np0005531754 podman[79491]: 2025-11-22 05:25:03.985110058 +0000 UTC m=+0.763758563 container died e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 22 00:25:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a88b4029d05eda3c48c7f19913dd26ae9870c37196cc3eecd61c99bc58a6de79-merged.mount: Deactivated successfully.
Nov 22 00:25:04 np0005531754 podman[79491]: 2025-11-22 05:25:04.037357008 +0000 UTC m=+0.816005483 container remove e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:25:04 np0005531754 systemd[1]: libpod-conmon-e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5.scope: Deactivated successfully.
Nov 22 00:25:04 np0005531754 determined_franklin[79460]: [
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:    {
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "available": false,
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "ceph_device": false,
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "lsm_data": {},
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "lvs": [],
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "path": "/dev/sr0",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "rejected_reasons": [
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "Insufficient space (<5GB)",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "Has a FileSystem"
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        ],
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        "sys_api": {
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "actuators": null,
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "device_nodes": "sr0",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "devname": "sr0",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "human_readable_size": "482.00 KB",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "id_bus": "ata",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "model": "QEMU DVD-ROM",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "nr_requests": "2",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "parent": "/dev/sr0",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "partitions": {},
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "path": "/dev/sr0",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "removable": "1",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "rev": "2.5+",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "ro": "0",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "rotational": "1",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "sas_address": "",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "sas_device_handle": "",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "scheduler_mode": "mq-deadline",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "sectors": 0,
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "sectorsize": "2048",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "size": 493568.0,
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "support_discard": "2048",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "type": "disk",
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:            "vendor": "QEMU"
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:        }
Nov 22 00:25:04 np0005531754 determined_franklin[79460]:    }
Nov 22 00:25:04 np0005531754 determined_franklin[79460]: ]
Nov 22 00:25:04 np0005531754 systemd[1]: libpod-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope: Deactivated successfully.
Nov 22 00:25:04 np0005531754 systemd[1]: libpod-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope: Consumed 1.555s CPU time.
Nov 22 00:25:04 np0005531754 podman[79444]: 2025-11-22 05:25:04.380226379 +0000 UTC m=+1.718647269 container died 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:25:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280-merged.mount: Deactivated successfully.
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2682568180' entity='client.admin' 
Nov 22 00:25:04 np0005531754 podman[79444]: 2025-11-22 05:25:04.448881749 +0000 UTC m=+1.787302629 container remove 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:04 np0005531754 systemd[1]: libpod-conmon-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope: Deactivated successfully.
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:25:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:04 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 22 00:25:04 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 22 00:25:05 np0005531754 ansible-async_wrapper.py[81738]: Invoked with j421611236538 30 /home/zuul/.ansible/tmp/ansible-tmp-1763789104.4488592-36405-60921918125236/AnsiballZ_command.py _
Nov 22 00:25:05 np0005531754 ansible-async_wrapper.py[81791]: Starting module and watcher
Nov 22 00:25:05 np0005531754 ansible-async_wrapper.py[81791]: Start watching 81792 (30)
Nov 22 00:25:05 np0005531754 ansible-async_wrapper.py[81792]: Start module (81792)
Nov 22 00:25:05 np0005531754 ansible-async_wrapper.py[81738]: Return async_wrapper task started.
Nov 22 00:25:05 np0005531754 python3[81794]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:05 np0005531754 podman[81844]: 2025-11-22 05:25:05.395886489 +0000 UTC m=+0.049174795 container create 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:05 np0005531754 systemd[1]: Started libpod-conmon-97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884.scope.
Nov 22 00:25:05 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d42708d7a283ed77d7f1ce11a43e8a1c07b35a3d55cf61fcd58d21e3c5f48ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d42708d7a283ed77d7f1ce11a43e8a1c07b35a3d55cf61fcd58d21e3c5f48ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:05 np0005531754 podman[81844]: 2025-11-22 05:25:05.375830917 +0000 UTC m=+0.029119213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:05 np0005531754 podman[81844]: 2025-11-22 05:25:05.48163288 +0000 UTC m=+0.134921136 container init 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:05 np0005531754 podman[81844]: 2025-11-22 05:25:05.492288814 +0000 UTC m=+0.145577070 container start 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:05 np0005531754 podman[81844]: 2025-11-22 05:25:05.495875592 +0000 UTC m=+0.149163848 container attach 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:25:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:05 np0005531754 ceph-mon[75840]: Updating compute-0:/etc/ceph/ceph.conf
Nov 22 00:25:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:05 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf
Nov 22 00:25:05 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf
Nov 22 00:25:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 00:25:06 np0005531754 quirky_einstein[81883]: 
Nov 22 00:25:06 np0005531754 quirky_einstein[81883]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 00:25:06 np0005531754 systemd[1]: libpod-97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884.scope: Deactivated successfully.
Nov 22 00:25:06 np0005531754 podman[81844]: 2025-11-22 05:25:06.039786971 +0000 UTC m=+0.693075227 container died 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:25:06 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0d42708d7a283ed77d7f1ce11a43e8a1c07b35a3d55cf61fcd58d21e3c5f48ed-merged.mount: Deactivated successfully.
Nov 22 00:25:06 np0005531754 podman[81844]: 2025-11-22 05:25:06.083148125 +0000 UTC m=+0.736436381 container remove 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:06 np0005531754 systemd[1]: libpod-conmon-97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884.scope: Deactivated successfully.
Nov 22 00:25:06 np0005531754 ansible-async_wrapper.py[81792]: Module complete (81792)
Nov 22 00:25:06 np0005531754 ceph-mon[75840]: Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf
Nov 22 00:25:06 np0005531754 python3[82389]: ansible-ansible.legacy.async_status Invoked with jid=j421611236538.81738 mode=status _async_dir=/root/.ansible_async
Nov 22 00:25:06 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 00:25:06 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 00:25:06 np0005531754 python3[82561]: ansible-ansible.legacy.async_status Invoked with jid=j421611236538.81738 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 00:25:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:07 np0005531754 python3[82738]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 00:25:07 np0005531754 ceph-mon[75840]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 00:25:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:07 np0005531754 python3[82940]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:07 np0005531754 podman[82996]: 2025-11-22 05:25:07.968819792 +0000 UTC m=+0.054139121 container create 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:08 np0005531754 systemd[1]: Started libpod-conmon-1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679.scope.
Nov 22 00:25:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:08 np0005531754 podman[82996]: 2025-11-22 05:25:08.026535292 +0000 UTC m=+0.111854671 container init 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:08 np0005531754 podman[82996]: 2025-11-22 05:25:08.03228685 +0000 UTC m=+0.117606179 container start 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:08 np0005531754 podman[82996]: 2025-11-22 05:25:08.035068907 +0000 UTC m=+0.120388256 container attach 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:25:08 np0005531754 podman[82996]: 2025-11-22 05:25:07.943629568 +0000 UTC m=+0.028948917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:08 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring
Nov 22 00:25:08 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring
Nov 22 00:25:08 np0005531754 ceph-mon[75840]: Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring
Nov 22 00:25:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 00:25:08 np0005531754 hopeful_aryabhata[83054]: 
Nov 22 00:25:08 np0005531754 hopeful_aryabhata[83054]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 00:25:08 np0005531754 systemd[1]: libpod-1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679.scope: Deactivated successfully.
Nov 22 00:25:08 np0005531754 podman[82996]: 2025-11-22 05:25:08.585901525 +0000 UTC m=+0.671220874 container died 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 00:25:08 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede-merged.mount: Deactivated successfully.
Nov 22 00:25:08 np0005531754 podman[82996]: 2025-11-22 05:25:08.630066452 +0000 UTC m=+0.715385781 container remove 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:25:08 np0005531754 systemd[1]: libpod-conmon-1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679.scope: Deactivated successfully.
Nov 22 00:25:09 np0005531754 python3[83515]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:09 np0005531754 podman[83549]: 2025-11-22 05:25:09.107372496 +0000 UTC m=+0.036514126 container create 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:09 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev eab0bc42-8735-47c8-81ce-32474a0e4087 (Updating crash deployment (+1 -> 1))
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 00:25:09 np0005531754 systemd[1]: Started libpod-conmon-5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122.scope.
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:09 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 22 00:25:09 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 22 00:25:09 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:09 np0005531754 podman[83549]: 2025-11-22 05:25:09.170256968 +0000 UTC m=+0.099398628 container init 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:09 np0005531754 podman[83549]: 2025-11-22 05:25:09.176428888 +0000 UTC m=+0.105570528 container start 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:25:09 np0005531754 podman[83549]: 2025-11-22 05:25:09.180095349 +0000 UTC m=+0.109236989 container attach 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:25:09 np0005531754 podman[83549]: 2025-11-22 05:25:09.091949312 +0000 UTC m=+0.021090982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 22 00:25:09 np0005531754 podman[83749]: 2025-11-22 05:25:09.73465488 +0000 UTC m=+0.109334781 container create a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1937902109' entity='client.admin' 
Nov 22 00:25:09 np0005531754 podman[83749]: 2025-11-22 05:25:09.652231771 +0000 UTC m=+0.026911692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:09 np0005531754 systemd[1]: libpod-5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122.scope: Deactivated successfully.
Nov 22 00:25:09 np0005531754 podman[83549]: 2025-11-22 05:25:09.76516458 +0000 UTC m=+0.694306220 container died 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 00:25:09 np0005531754 systemd[1]: Started libpod-conmon-a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061.scope.
Nov 22 00:25:09 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162-merged.mount: Deactivated successfully.
Nov 22 00:25:09 np0005531754 podman[83749]: 2025-11-22 05:25:09.820927456 +0000 UTC m=+0.195607587 container init a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:09 np0005531754 podman[83749]: 2025-11-22 05:25:09.828632938 +0000 UTC m=+0.203312819 container start a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:09 np0005531754 podman[83749]: 2025-11-22 05:25:09.832071752 +0000 UTC m=+0.206751723 container attach a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:25:09 np0005531754 eloquent_pascal[83773]: 167 167
Nov 22 00:25:09 np0005531754 systemd[1]: libpod-a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061.scope: Deactivated successfully.
Nov 22 00:25:09 np0005531754 podman[83549]: 2025-11-22 05:25:09.843089966 +0000 UTC m=+0.772231586 container remove 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:25:09 np0005531754 systemd[1]: libpod-conmon-5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122.scope: Deactivated successfully.
Nov 22 00:25:09 np0005531754 podman[83785]: 2025-11-22 05:25:09.88752708 +0000 UTC m=+0.033839463 container died a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:25:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ee3c60df18fed0b31cd3ad6f6ab05830882276f8ab7cfff6a16b6da026e7c3ef-merged.mount: Deactivated successfully.
Nov 22 00:25:09 np0005531754 podman[83785]: 2025-11-22 05:25:09.930503284 +0000 UTC m=+0.076815617 container remove a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:25:09 np0005531754 systemd[1]: libpod-conmon-a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061.scope: Deactivated successfully.
Nov 22 00:25:09 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:10 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:10 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: Deploying daemon crash.compute-0 on compute-0
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1937902109' entity='client.admin' 
Nov 22 00:25:10 np0005531754 ansible-async_wrapper.py[81791]: Done in kid B.
Nov 22 00:25:10 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:10 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:10 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:10 np0005531754 python3[83865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:10 np0005531754 podman[83904]: 2025-11-22 05:25:10.487800561 +0000 UTC m=+0.061733872 container create de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:25:10 np0005531754 podman[83904]: 2025-11-22 05:25:10.45979858 +0000 UTC m=+0.033731971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:10 np0005531754 systemd[1]: Started libpod-conmon-de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a.scope.
Nov 22 00:25:10 np0005531754 systemd[1]: Starting Ceph crash.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:25:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:10 np0005531754 podman[83904]: 2025-11-22 05:25:10.60544187 +0000 UTC m=+0.179375221 container init de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:10 np0005531754 podman[83904]: 2025-11-22 05:25:10.614084588 +0000 UTC m=+0.188017889 container start de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 00:25:10 np0005531754 podman[83904]: 2025-11-22 05:25:10.617850381 +0000 UTC m=+0.191783732 container attach de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:25:10 np0005531754 podman[83975]: 2025-11-22 05:25:10.846795976 +0000 UTC m=+0.048034143 container create c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:10 np0005531754 podman[83975]: 2025-11-22 05:25:10.913789491 +0000 UTC m=+0.115027678 container init c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:25:10 np0005531754 podman[83975]: 2025-11-22 05:25:10.823552656 +0000 UTC m=+0.024790843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:10 np0005531754 podman[83975]: 2025-11-22 05:25:10.924824525 +0000 UTC m=+0.126062692 container start c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:10 np0005531754 bash[83975]: c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708
Nov 22 00:25:10 np0005531754 systemd[1]: Started Ceph crash.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:25:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev eab0bc42-8735-47c8-81ce-32474a0e4087 (Updating crash deployment (+1 -> 1))
Nov 22 00:25:11 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event eab0bc42-8735-47c8-81ce-32474a0e4087 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d7b9a17f-2291-470e-9487-19ad1ed48200 does not exist
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 86683be6-d313-416c-8d8b-8d87f0b74c48 (Updating mgr deployment (+1 -> 2))
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:11 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.okewxb on compute-0
Nov 22 00:25:11 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.okewxb on compute-0
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 22 00:25:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809843038' entity='client.admin' 
Nov 22 00:25:11 np0005531754 systemd[1]: libpod-de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a.scope: Deactivated successfully.
Nov 22 00:25:11 np0005531754 podman[83904]: 2025-11-22 05:25:11.233121215 +0000 UTC m=+0.807054516 container died de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:25:11 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5-merged.mount: Deactivated successfully.
Nov 22 00:25:11 np0005531754 podman[83904]: 2025-11-22 05:25:11.285416555 +0000 UTC m=+0.859349856 container remove de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:11 np0005531754 systemd[1]: libpod-conmon-de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a.scope: Deactivated successfully.
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.408+0000 7f4e9a568640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.408+0000 7f4e9a568640 -1 AuthRegistry(0x7f4e94066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.409+0000 7f4e9a568640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.409+0000 7f4e9a568640 -1 AuthRegistry(0x7f4e9a567000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.410+0000 7f4e93fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.410+0000 7f4e9a568640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 22 00:25:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 22 00:25:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:11 np0005531754 python3[84166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:11 np0005531754 podman[84189]: 2025-11-22 05:25:11.764674863 +0000 UTC m=+0.080399105 container create 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:25:11 np0005531754 systemd[1]: Started libpod-conmon-1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6.scope.
Nov 22 00:25:11 np0005531754 podman[84189]: 2025-11-22 05:25:11.727695604 +0000 UTC m=+0.043419906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:11 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:11 np0005531754 podman[84218]: 2025-11-22 05:25:11.853064917 +0000 UTC m=+0.065245668 container create e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:11 np0005531754 podman[84189]: 2025-11-22 05:25:11.861756267 +0000 UTC m=+0.177480489 container init 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:11 np0005531754 podman[84189]: 2025-11-22 05:25:11.871202137 +0000 UTC m=+0.186926349 container start 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:11 np0005531754 podman[84189]: 2025-11-22 05:25:11.876380719 +0000 UTC m=+0.192104931 container attach 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:25:11 np0005531754 systemd[1]: Started libpod-conmon-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope.
Nov 22 00:25:11 np0005531754 podman[84218]: 2025-11-22 05:25:11.82519927 +0000 UTC m=+0.037380021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:11 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:11 np0005531754 podman[84218]: 2025-11-22 05:25:11.960040893 +0000 UTC m=+0.172221634 container init e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:11 np0005531754 podman[84218]: 2025-11-22 05:25:11.967156889 +0000 UTC m=+0.179337630 container start e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:11 np0005531754 podman[84218]: 2025-11-22 05:25:11.971242821 +0000 UTC m=+0.183423562 container attach e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:25:11 np0005531754 determined_wilson[84240]: 167 167
Nov 22 00:25:11 np0005531754 systemd[1]: libpod-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope: Deactivated successfully.
Nov 22 00:25:11 np0005531754 conmon[84240]: conmon e975d9bf3dd3943f847e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope/container/memory.events
Nov 22 00:25:11 np0005531754 podman[84218]: 2025-11-22 05:25:11.974272625 +0000 UTC m=+0.186453366 container died e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:25:12 np0005531754 systemd[1]: var-lib-containers-storage-overlay-aa0eead779fc8403c2542ab46a873511fad7f8774a3e4ff45c07d4c023111de7-merged.mount: Deactivated successfully.
Nov 22 00:25:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:12 np0005531754 podman[84218]: 2025-11-22 05:25:12.028399746 +0000 UTC m=+0.240580467 container remove e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:12 np0005531754 systemd[1]: libpod-conmon-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope: Deactivated successfully.
Nov 22 00:25:12 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:12 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:12 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:12 np0005531754 ceph-mon[75840]: Deploying daemon mgr.compute-0.okewxb on compute-0
Nov 22 00:25:12 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1809843038' entity='client.admin' 
Nov 22 00:25:12 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 22 00:25:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 00:25:12 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:12 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:12 np0005531754 systemd[1]: Starting Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:25:12 np0005531754 podman[84402]: 2025-11-22 05:25:12.988317039 +0000 UTC m=+0.065991808 container create 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/var/lib/ceph/mgr/ceph-compute-0.okewxb supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:13 np0005531754 podman[84402]: 2025-11-22 05:25:12.957355667 +0000 UTC m=+0.035030506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:13 np0005531754 podman[84402]: 2025-11-22 05:25:13.059371726 +0000 UTC m=+0.137046475 container init 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:25:13 np0005531754 podman[84402]: 2025-11-22 05:25:13.070169174 +0000 UTC m=+0.147843943 container start 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:25:13 np0005531754 bash[84402]: 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6
Nov 22 00:25:13 np0005531754 systemd[1]: Started Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: pidfile_write: ignore empty --pid-file
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 86683be6-d313-416c-8d8b-8d87f0b74c48 (Updating mgr deployment (+1 -> 2))
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 86683be6-d313-416c-8d8b-8d87f0b74c48 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 22 00:25:13 np0005531754 flamboyant_euler[84231]: set require_min_compat_client to mimic
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'alerts'
Nov 22 00:25:13 np0005531754 systemd[1]: libpod-1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6.scope: Deactivated successfully.
Nov 22 00:25:13 np0005531754 podman[84189]: 2025-11-22 05:25:13.262659314 +0000 UTC m=+1.578383516 container died 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:25:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68-merged.mount: Deactivated successfully.
Nov 22 00:25:13 np0005531754 podman[84189]: 2025-11-22 05:25:13.308760154 +0000 UTC m=+1.624484356 container remove 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:13 np0005531754 systemd[1]: libpod-conmon-1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6.scope: Deactivated successfully.
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'balancer'
Nov 22 00:25:13 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:13.560+0000 7f6cc6f38140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [progress INFO root] Writing back 2 completed events
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 00:25:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:25:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 00:25:13 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'cephadm'
Nov 22 00:25:13 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:13.823+0000 7f6cc6f38140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 00:25:13 np0005531754 python3[84633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:14 np0005531754 podman[84674]: 2025-11-22 05:25:14.056375372 +0000 UTC m=+0.095304305 container create abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:14 np0005531754 podman[84674]: 2025-11-22 05:25:14.005656635 +0000 UTC m=+0.044585568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:14 np0005531754 systemd[1]: Started libpod-conmon-abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c.scope.
Nov 22 00:25:14 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:14 np0005531754 podman[84674]: 2025-11-22 05:25:14.175627756 +0000 UTC m=+0.214556679 container init abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:25:14 np0005531754 podman[84674]: 2025-11-22 05:25:14.183616166 +0000 UTC m=+0.222545089 container start abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:25:14 np0005531754 podman[84674]: 2025-11-22 05:25:14.20951638 +0000 UTC m=+0.248445303 container attach abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:14 np0005531754 podman[84716]: 2025-11-22 05:25:14.236726069 +0000 UTC m=+0.055771337 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 podman[84716]: 2025-11-22 05:25:14.360007704 +0000 UTC m=+0.179052962 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 461bd35b-0122-40d8-a4b2-3bf20812d1e6 does not exist
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 6138f3e6-9db8-482c-b0ad-466b76d84df1 does not exist
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5dffe940-c2f7-4585-9346-d7bce402b49d does not exist
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 00:25:14 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Added host compute-0
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 objective_brahmagupta[84709]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 00:25:15 np0005531754 objective_brahmagupta[84709]: Scheduled mon update...
Nov 22 00:25:15 np0005531754 objective_brahmagupta[84709]: Scheduled mgr update...
Nov 22 00:25:15 np0005531754 objective_brahmagupta[84709]: Scheduled osd.default_drive_group update...
Nov 22 00:25:15 np0005531754 systemd[1]: libpod-abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c.scope: Deactivated successfully.
Nov 22 00:25:15 np0005531754 podman[84674]: 2025-11-22 05:25:15.458465534 +0000 UTC m=+1.497394447 container died abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:15 np0005531754 systemd[1]: var-lib-containers-storage-overlay-04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f-merged.mount: Deactivated successfully.
Nov 22 00:25:15 np0005531754 podman[84674]: 2025-11-22 05:25:15.517604012 +0000 UTC m=+1.556532935 container remove abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:15 np0005531754 systemd[1]: libpod-conmon-abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c.scope: Deactivated successfully.
Nov 22 00:25:15 np0005531754 podman[85122]: 2025-11-22 05:25:15.555775183 +0000 UTC m=+0.072508038 container create f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:15 np0005531754 systemd[1]: Started libpod-conmon-f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315.scope.
Nov 22 00:25:15 np0005531754 podman[85122]: 2025-11-22 05:25:15.525614433 +0000 UTC m=+0.042347288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:15 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:15 np0005531754 podman[85122]: 2025-11-22 05:25:15.640600899 +0000 UTC m=+0.157333814 container init f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:15 np0005531754 podman[85122]: 2025-11-22 05:25:15.656088145 +0000 UTC m=+0.172820991 container start f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:25:15 np0005531754 optimistic_lovelace[85146]: 167 167
Nov 22 00:25:15 np0005531754 podman[85122]: 2025-11-22 05:25:15.660707552 +0000 UTC m=+0.177440457 container attach f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:15 np0005531754 systemd[1]: libpod-f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315.scope: Deactivated successfully.
Nov 22 00:25:15 np0005531754 podman[85122]: 2025-11-22 05:25:15.66207771 +0000 UTC m=+0.178810575 container died f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:15 np0005531754 systemd[1]: var-lib-containers-storage-overlay-dfcdd8fb98dadc96c135f7fc7fbc880eea397af5219521ba7f70783e06176f03-merged.mount: Deactivated successfully.
Nov 22 00:25:15 np0005531754 podman[85122]: 2025-11-22 05:25:15.710497704 +0000 UTC m=+0.227230519 container remove f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:15 np0005531754 systemd[1]: libpod-conmon-f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315.scope: Deactivated successfully.
Nov 22 00:25:15 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'crash'
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mscchl (unknown last config time)...
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mscchl (unknown last config time)...
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mscchl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mscchl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mscchl on compute-0
Nov 22 00:25:15 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mscchl on compute-0
Nov 22 00:25:16 np0005531754 python3[85215]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:16 np0005531754 ceph-mgr[84421]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 00:25:16 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'dashboard'
Nov 22 00:25:16 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:16.044+0000 7f6cc6f38140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 00:25:16 np0005531754 podman[85268]: 2025-11-22 05:25:16.070700563 +0000 UTC m=+0.050134962 container create 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:25:16 np0005531754 systemd[1]: Started libpod-conmon-3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722.scope.
Nov 22 00:25:16 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:16 np0005531754 podman[85268]: 2025-11-22 05:25:16.05096417 +0000 UTC m=+0.030398609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:16 np0005531754 podman[85268]: 2025-11-22 05:25:16.160036823 +0000 UTC m=+0.139471222 container init 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:25:16 np0005531754 podman[85268]: 2025-11-22 05:25:16.167971551 +0000 UTC m=+0.147405940 container start 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:25:16 np0005531754 podman[85268]: 2025-11-22 05:25:16.171693584 +0000 UTC m=+0.151128203 container attach 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mscchl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 00:25:16 np0005531754 podman[85331]: 2025-11-22 05:25:16.329036147 +0000 UTC m=+0.057864255 container create 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 22 00:25:16 np0005531754 systemd[1]: Started libpod-conmon-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope.
Nov 22 00:25:16 np0005531754 podman[85331]: 2025-11-22 05:25:16.302148486 +0000 UTC m=+0.030976594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:16 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:16 np0005531754 podman[85331]: 2025-11-22 05:25:16.414377087 +0000 UTC m=+0.143205185 container init 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:25:16 np0005531754 podman[85331]: 2025-11-22 05:25:16.42102206 +0000 UTC m=+0.149850128 container start 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:16 np0005531754 objective_lovelace[85348]: 167 167
Nov 22 00:25:16 np0005531754 podman[85331]: 2025-11-22 05:25:16.424233678 +0000 UTC m=+0.153061756 container attach 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:16 np0005531754 systemd[1]: libpod-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope: Deactivated successfully.
Nov 22 00:25:16 np0005531754 conmon[85348]: conmon 7cd621ba9bed9821618a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope/container/memory.events
Nov 22 00:25:16 np0005531754 podman[85331]: 2025-11-22 05:25:16.425555335 +0000 UTC m=+0.154383403 container died 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:16 np0005531754 podman[85331]: 2025-11-22 05:25:16.465276759 +0000 UTC m=+0.194104827 container remove 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 00:25:16 np0005531754 systemd[1]: libpod-conmon-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope: Deactivated successfully.
Nov 22 00:25:16 np0005531754 systemd[1]: var-lib-containers-storage-overlay-faf45bc56fee1a717a6ebfa86ff8bbb8b2a3246e475600dcf96143f1a5dc69a4-merged.mount: Deactivated successfully.
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 00:25:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751463261' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 00:25:16 np0005531754 zealous_mcnulty[85310]: 
Nov 22 00:25:16 np0005531754 zealous_mcnulty[85310]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-22T05:23:54.066984+0000","services":{}},"progress_events":{}}
Nov 22 00:25:16 np0005531754 systemd[1]: libpod-3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722.scope: Deactivated successfully.
Nov 22 00:25:16 np0005531754 podman[85268]: 2025-11-22 05:25:16.820293675 +0000 UTC m=+0.799728094 container died 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:25:16 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5-merged.mount: Deactivated successfully.
Nov 22 00:25:16 np0005531754 podman[85268]: 2025-11-22 05:25:16.866060696 +0000 UTC m=+0.845495085 container remove 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:16 np0005531754 systemd[1]: libpod-conmon-3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722.scope: Deactivated successfully.
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: Added host compute-0
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: Saving service mon spec with placement compute-0
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: Saving service mgr spec with placement compute-0
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: Saving service osd.default_drive_group spec with placement compute-0
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: Reconfiguring mgr.compute-0.mscchl (unknown last config time)...
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: Reconfiguring daemon mgr.compute-0.mscchl on compute-0
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 podman[85570]: 2025-11-22 05:25:17.291624295 +0000 UTC m=+0.049683580 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 00:25:17 np0005531754 podman[85570]: 2025-11-22 05:25:17.411064864 +0000 UTC m=+0.169124149 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:17 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'devicehealth'
Nov 22 00:25:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mgr[84421]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 00:25:17 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 00:25:17 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:17.775+0000 7f6cc6f38140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ece46ffd-0618-4823-99e7-639fda0645c6 does not exist
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 00:25:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:17 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 2231a612-6d16-4523-aa02-dbdc0d490ff6 (Updating mgr deployment (-1 -> 1))
Nov 22 00:25:17 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.okewxb from compute-0 -- ports [8765]
Nov 22 00:25:17 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.okewxb from compute-0 -- ports [8765]
Nov 22 00:25:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 00:25:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 00:25:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]:  from numpy import show_config as show_numpy_config
Nov 22 00:25:18 np0005531754 ceph-mgr[84421]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 00:25:18 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'influx'
Nov 22 00:25:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:18.310+0000 7f6cc6f38140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 00:25:18 np0005531754 systemd[1]: Stopping Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:25:18 np0005531754 ceph-mgr[84421]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 00:25:18 np0005531754 ceph-mgr[84421]: mgr[py] Loading python module 'insights'
Nov 22 00:25:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:18.539+0000 7f6cc6f38140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 00:25:18 np0005531754 podman[85821]: 2025-11-22 05:25:18.655182965 +0000 UTC m=+0.143468573 container died 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 00:25:18 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6-merged.mount: Deactivated successfully.
Nov 22 00:25:18 np0005531754 podman[85821]: 2025-11-22 05:25:18.730071707 +0000 UTC m=+0.218357305 container remove 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:18 np0005531754 bash[85821]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb
Nov 22 00:25:18 np0005531754 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.okewxb.service: Main process exited, code=exited, status=143/n/a
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:18 np0005531754 ceph-mon[75840]: Removing daemon mgr.compute-0.okewxb from compute-0 -- ports [8765]
Nov 22 00:25:18 np0005531754 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.okewxb.service: Failed with result 'exit-code'.
Nov 22 00:25:18 np0005531754 systemd[1]: Stopped Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:25:18 np0005531754 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.okewxb.service: Consumed 6.502s CPU time.
Nov 22 00:25:18 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:19 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:19 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:19 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.okewxb
Nov 22 00:25:19 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.okewxb
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"} v 0) v1
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]: dispatch
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]': finished
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:19 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 2231a612-6d16-4523-aa02-dbdc0d490ff6 (Updating mgr deployment (-1 -> 1))
Nov 22 00:25:19 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 2231a612-6d16-4523-aa02-dbdc0d490ff6 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:19 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5ca36ffb-4c3f-4fda-8720-34be37d72804 does not exist
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: Removing key for mgr.compute-0.okewxb
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]: dispatch
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]': finished
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:25:19 np0005531754 podman[86058]: 2025-11-22 05:25:19.968073289 +0000 UTC m=+0.060918918 container create a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:20 np0005531754 systemd[1]: Started libpod-conmon-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope.
Nov 22 00:25:20 np0005531754 podman[86058]: 2025-11-22 05:25:19.946046873 +0000 UTC m=+0.038892582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:20 np0005531754 podman[86058]: 2025-11-22 05:25:20.066192052 +0000 UTC m=+0.159037691 container init a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:25:20 np0005531754 podman[86058]: 2025-11-22 05:25:20.075746104 +0000 UTC m=+0.168591743 container start a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:20 np0005531754 podman[86058]: 2025-11-22 05:25:20.079494007 +0000 UTC m=+0.172339676 container attach a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:20 np0005531754 magical_yonath[86074]: 167 167
Nov 22 00:25:20 np0005531754 systemd[1]: libpod-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope: Deactivated successfully.
Nov 22 00:25:20 np0005531754 conmon[86074]: conmon a47f0a5da247be23b820 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope/container/memory.events
Nov 22 00:25:20 np0005531754 podman[86058]: 2025-11-22 05:25:20.083602241 +0000 UTC m=+0.176447870 container died a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:25:20 np0005531754 systemd[1]: var-lib-containers-storage-overlay-de41b1af5d169b68dde5312eabbe6b97d46704d984129965c06b9edc6b727d7a-merged.mount: Deactivated successfully.
Nov 22 00:25:20 np0005531754 podman[86058]: 2025-11-22 05:25:20.121175945 +0000 UTC m=+0.214021564 container remove a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:20 np0005531754 systemd[1]: libpod-conmon-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope: Deactivated successfully.
Nov 22 00:25:20 np0005531754 podman[86097]: 2025-11-22 05:25:20.296183235 +0000 UTC m=+0.044476746 container create 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:25:20 np0005531754 systemd[1]: Started libpod-conmon-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope.
Nov 22 00:25:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:20 np0005531754 podman[86097]: 2025-11-22 05:25:20.2774939 +0000 UTC m=+0.025787441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:20 np0005531754 podman[86097]: 2025-11-22 05:25:20.387183841 +0000 UTC m=+0.135477352 container init 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 00:25:20 np0005531754 podman[86097]: 2025-11-22 05:25:20.401197707 +0000 UTC m=+0.149491218 container start 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:25:20 np0005531754 podman[86097]: 2025-11-22 05:25:20.40530789 +0000 UTC m=+0.153601401 container attach 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:21 np0005531754 ecstatic_saha[86114]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:25:21 np0005531754 ecstatic_saha[86114]: --> relative data size: 1.0
Nov 22 00:25:21 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 00:25:21 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a5feb48b-30da-4436-abf9-8885d26e1de8
Nov 22 00:25:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"} v 0) v1
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]: dispatch
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]': finished
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:21 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:22 np0005531754 lvm[86176]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 00:25:22 np0005531754 lvm[86176]: VG ceph_vg0 finished
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 22 00:25:22 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]: dispatch
Nov 22 00:25:22 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]': finished
Nov 22 00:25:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 00:25:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843832395' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: stderr: got monmap epoch 1
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: --> Creating keyring file for osd.0
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 22 00:25:22 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a5feb48b-30da-4436-abf9-8885d26e1de8 --setuser ceph --setgroup ceph
Nov 22 00:25:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:23 np0005531754 ceph-mgr[76134]: [progress INFO root] Writing back 3 completed events
Nov 22 00:25:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 00:25:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:24 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 00:25:24 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 00:25:24 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:24 np0005531754 ceph-mon[75840]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 00:25:24 np0005531754 ceph-mon[75840]: Cluster is now healthy
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1fb2d706-3ef2-43d5-9448-a482f97db695
Nov 22 00:25:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"} v 0) v1
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]: dispatch
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]': finished
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:25 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:25 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:25 np0005531754 lvm[87117]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 00:25:25 np0005531754 lvm[87117]: VG ceph_vg1 finished
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 22 00:25:25 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 22 00:25:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 00:25:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344868377' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: stderr: got monmap epoch 1
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: --> Creating keyring file for osd.1
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 22 00:25:26 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 1fb2d706-3ef2-43d5-9448-a482f97db695 --setuser ceph --setgroup ceph
Nov 22 00:25:26 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]: dispatch
Nov 22 00:25:26 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]': finished
Nov 22 00:25:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:26.585+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:26.586+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:26.586+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:26.586+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 315eef4c-16c8-4117-80ec-ccdc45d85649
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"} v 0) v1
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]: dispatch
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]': finished
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:29 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:29 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:29 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:29 np0005531754 lvm[88060]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 00:25:29 np0005531754 lvm[88060]: VG ceph_vg2 finished
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]: dispatch
Nov 22 00:25:29 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]': finished
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:29 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 22 00:25:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 00:25:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174537781' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 00:25:30 np0005531754 ecstatic_saha[86114]: stderr: got monmap epoch 1
Nov 22 00:25:30 np0005531754 ecstatic_saha[86114]: --> Creating keyring file for osd.2
Nov 22 00:25:30 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 22 00:25:30 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 22 00:25:30 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 315eef4c-16c8-4117-80ec-ccdc45d85649 --setuser ceph --setgroup ceph
Nov 22 00:25:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:30.383+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:30.383+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:30.383+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: stderr: 2025-11-22T05:25:30.384+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 22 00:25:32 np0005531754 ecstatic_saha[86114]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 22 00:25:33 np0005531754 systemd[1]: libpod-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope: Deactivated successfully.
Nov 22 00:25:33 np0005531754 podman[86097]: 2025-11-22 05:25:33.040977401 +0000 UTC m=+12.789270932 container died 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:33 np0005531754 systemd[1]: libpod-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope: Consumed 6.505s CPU time.
Nov 22 00:25:33 np0005531754 systemd[1]: var-lib-containers-storage-overlay-22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474-merged.mount: Deactivated successfully.
Nov 22 00:25:33 np0005531754 podman[86097]: 2025-11-22 05:25:33.114102703 +0000 UTC m=+12.862396214 container remove 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:33 np0005531754 systemd[1]: libpod-conmon-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope: Deactivated successfully.
Nov 22 00:25:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:33 np0005531754 podman[89123]: 2025-11-22 05:25:33.905397462 +0000 UTC m=+0.061678302 container create c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:25:33 np0005531754 systemd[1]: Started libpod-conmon-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope.
Nov 22 00:25:33 np0005531754 podman[89123]: 2025-11-22 05:25:33.881768478 +0000 UTC m=+0.038049328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:33 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:34 np0005531754 podman[89123]: 2025-11-22 05:25:33.999843893 +0000 UTC m=+0.156124793 container init c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 22 00:25:34 np0005531754 podman[89123]: 2025-11-22 05:25:34.012817031 +0000 UTC m=+0.169097841 container start c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:34 np0005531754 podman[89123]: 2025-11-22 05:25:34.016536039 +0000 UTC m=+0.172816949 container attach c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:25:34 np0005531754 vigilant_hermann[89139]: 167 167
Nov 22 00:25:34 np0005531754 systemd[1]: libpod-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope: Deactivated successfully.
Nov 22 00:25:34 np0005531754 conmon[89139]: conmon c8c08c02c2b201891c07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope/container/memory.events
Nov 22 00:25:34 np0005531754 podman[89123]: 2025-11-22 05:25:34.019656027 +0000 UTC m=+0.175936857 container died c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:25:34 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8897d407c1b10b2922263a45955096034b26325079e37ff8216bd61c0832b2da-merged.mount: Deactivated successfully.
Nov 22 00:25:34 np0005531754 podman[89123]: 2025-11-22 05:25:34.061871505 +0000 UTC m=+0.218152325 container remove c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:34 np0005531754 systemd[1]: libpod-conmon-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope: Deactivated successfully.
Nov 22 00:25:34 np0005531754 podman[89160]: 2025-11-22 05:25:34.274378242 +0000 UTC m=+0.070740197 container create 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 00:25:34 np0005531754 systemd[1]: Started libpod-conmon-0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d.scope.
Nov 22 00:25:34 np0005531754 podman[89160]: 2025-11-22 05:25:34.245121901 +0000 UTC m=+0.041483886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:34 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:34 np0005531754 podman[89160]: 2025-11-22 05:25:34.381466142 +0000 UTC m=+0.177828157 container init 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:34 np0005531754 podman[89160]: 2025-11-22 05:25:34.396081711 +0000 UTC m=+0.192443666 container start 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:25:34 np0005531754 podman[89160]: 2025-11-22 05:25:34.400291714 +0000 UTC m=+0.196653729 container attach 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]: {
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:    "0": [
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:        {
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "devices": [
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "/dev/loop3"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            ],
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_name": "ceph_lv0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_size": "21470642176",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "name": "ceph_lv0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "tags": {
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cluster_name": "ceph",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.crush_device_class": "",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.encrypted": "0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osd_id": "0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.type": "block",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.vdo": "0"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            },
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "type": "block",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "vg_name": "ceph_vg0"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:        }
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:    ],
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:    "1": [
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:        {
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "devices": [
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "/dev/loop4"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            ],
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_name": "ceph_lv1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_size": "21470642176",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "name": "ceph_lv1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "tags": {
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cluster_name": "ceph",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.crush_device_class": "",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.encrypted": "0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osd_id": "1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.type": "block",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.vdo": "0"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            },
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "type": "block",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "vg_name": "ceph_vg1"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:        }
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:    ],
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:    "2": [
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:        {
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "devices": [
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "/dev/loop5"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            ],
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_name": "ceph_lv2",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_size": "21470642176",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "name": "ceph_lv2",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "tags": {
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.cluster_name": "ceph",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.crush_device_class": "",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.encrypted": "0",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osd_id": "2",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.type": "block",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:                "ceph.vdo": "0"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            },
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "type": "block",
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:            "vg_name": "ceph_vg2"
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:        }
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]:    ]
Nov 22 00:25:35 np0005531754 blissful_gauss[89176]: }
Nov 22 00:25:35 np0005531754 systemd[1]: libpod-0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d.scope: Deactivated successfully.
Nov 22 00:25:35 np0005531754 podman[89160]: 2025-11-22 05:25:35.207770552 +0000 UTC m=+1.004132557 container died 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:35 np0005531754 systemd[1]: var-lib-containers-storage-overlay-cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d-merged.mount: Deactivated successfully.
Nov 22 00:25:35 np0005531754 podman[89160]: 2025-11-22 05:25:35.293212271 +0000 UTC m=+1.089574196 container remove 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 00:25:35 np0005531754 systemd[1]: libpod-conmon-0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d.scope: Deactivated successfully.
Nov 22 00:25:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 22 00:25:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 00:25:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:35 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 22 00:25:35 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 22 00:25:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 00:25:36 np0005531754 podman[89337]: 2025-11-22 05:25:36.125657084 +0000 UTC m=+0.061477176 container create a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:36 np0005531754 systemd[1]: Started libpod-conmon-a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584.scope.
Nov 22 00:25:36 np0005531754 podman[89337]: 2025-11-22 05:25:36.094456472 +0000 UTC m=+0.030276624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:36 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:36 np0005531754 podman[89337]: 2025-11-22 05:25:36.215936915 +0000 UTC m=+0.151757057 container init a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:36 np0005531754 podman[89337]: 2025-11-22 05:25:36.224070921 +0000 UTC m=+0.159890973 container start a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:36 np0005531754 podman[89337]: 2025-11-22 05:25:36.227764237 +0000 UTC m=+0.163584329 container attach a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:25:36 np0005531754 silly_clarke[89353]: 167 167
Nov 22 00:25:36 np0005531754 systemd[1]: libpod-a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584.scope: Deactivated successfully.
Nov 22 00:25:36 np0005531754 podman[89337]: 2025-11-22 05:25:36.230033329 +0000 UTC m=+0.165853371 container died a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:36 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5f971d4164a4ab3e48fa2402aa8a9ebbd7205abe12c8828339e281f91b55a686-merged.mount: Deactivated successfully.
Nov 22 00:25:36 np0005531754 podman[89337]: 2025-11-22 05:25:36.267116915 +0000 UTC m=+0.202936977 container remove a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:25:36 np0005531754 systemd[1]: libpod-conmon-a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584.scope: Deactivated successfully.
Nov 22 00:25:36 np0005531754 podman[89384]: 2025-11-22 05:25:36.610927914 +0000 UTC m=+0.060636569 container create 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:36 np0005531754 systemd[1]: Started libpod-conmon-8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79.scope.
Nov 22 00:25:36 np0005531754 podman[89384]: 2025-11-22 05:25:36.592282677 +0000 UTC m=+0.041991342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:36 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:36 np0005531754 podman[89384]: 2025-11-22 05:25:36.715602718 +0000 UTC m=+0.165311383 container init 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:25:36 np0005531754 podman[89384]: 2025-11-22 05:25:36.733060267 +0000 UTC m=+0.182768922 container start 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:25:36 np0005531754 podman[89384]: 2025-11-22 05:25:36.737499736 +0000 UTC m=+0.187208391 container attach 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:36 np0005531754 ceph-mon[75840]: Deploying daemon osd.0 on compute-0
Nov 22 00:25:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:37 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test[89400]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 00:25:37 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test[89400]:                            [--no-systemd] [--no-tmpfs]
Nov 22 00:25:37 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test[89400]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 00:25:37 np0005531754 systemd[1]: libpod-8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79.scope: Deactivated successfully.
Nov 22 00:25:37 np0005531754 podman[89384]: 2025-11-22 05:25:37.370227386 +0000 UTC m=+0.819936031 container died 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:37 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8-merged.mount: Deactivated successfully.
Nov 22 00:25:37 np0005531754 podman[89384]: 2025-11-22 05:25:37.447833428 +0000 UTC m=+0.897542073 container remove 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 22 00:25:37 np0005531754 systemd[1]: libpod-conmon-8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79.scope: Deactivated successfully.
Nov 22 00:25:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:37 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:37 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:37 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:38 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:38 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:38 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:38 np0005531754 systemd[1]: Starting Ceph osd.0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:25:38 np0005531754 podman[89564]: 2025-11-22 05:25:38.64411681 +0000 UTC m=+0.064436859 container create d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 00:25:38 np0005531754 podman[89564]: 2025-11-22 05:25:38.615273862 +0000 UTC m=+0.035593911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:38 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:38 np0005531754 podman[89564]: 2025-11-22 05:25:38.743790236 +0000 UTC m=+0.164110335 container init d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:38 np0005531754 podman[89564]: 2025-11-22 05:25:38.754991088 +0000 UTC m=+0.175311127 container start d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 00:25:38 np0005531754 podman[89564]: 2025-11-22 05:25:38.758953913 +0000 UTC m=+0.179274012 container attach d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 00:25:39 np0005531754 bash[89564]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 00:25:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 00:25:39 np0005531754 bash[89564]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 00:25:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 00:25:39 np0005531754 bash[89564]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 00:25:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 00:25:39 np0005531754 bash[89564]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 00:25:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:39 np0005531754 bash[89564]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 00:25:39 np0005531754 bash[89564]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 00:25:40 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 00:25:40 np0005531754 bash[89564]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 00:25:40 np0005531754 systemd[1]: libpod-d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048.scope: Deactivated successfully.
Nov 22 00:25:40 np0005531754 systemd[1]: libpod-d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048.scope: Consumed 1.293s CPU time.
Nov 22 00:25:40 np0005531754 podman[89564]: 2025-11-22 05:25:40.034090417 +0000 UTC m=+1.454410506 container died d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 00:25:40 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863-merged.mount: Deactivated successfully.
Nov 22 00:25:40 np0005531754 podman[89564]: 2025-11-22 05:25:40.098791883 +0000 UTC m=+1.519111912 container remove d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:25:40 np0005531754 podman[89760]: 2025-11-22 05:25:40.400562229 +0000 UTC m=+0.066981429 container create 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 00:25:40 np0005531754 podman[89760]: 2025-11-22 05:25:40.37263248 +0000 UTC m=+0.039051690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:40 np0005531754 podman[89760]: 2025-11-22 05:25:40.484184769 +0000 UTC m=+0.150603999 container init 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:25:40 np0005531754 podman[89760]: 2025-11-22 05:25:40.495462594 +0000 UTC m=+0.161881794 container start 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:25:40 np0005531754 bash[89760]: 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3
Nov 22 00:25:40 np0005531754 systemd[1]: Started Ceph osd.0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: pidfile_write: ignore empty --pid-file
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:40 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 22 00:25:40 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 22 00:25:40 np0005531754 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: load: jerasure load: lrc 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 00:25:41 np0005531754 podman[89939]: 2025-11-22 05:25:41.32222719 +0000 UTC m=+0.053945859 container create 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:25:41 np0005531754 systemd[1]: Started libpod-conmon-86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173.scope.
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 00:25:41 np0005531754 podman[89939]: 2025-11-22 05:25:41.294251889 +0000 UTC m=+0.025970558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:41 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:41 np0005531754 podman[89939]: 2025-11-22 05:25:41.437660701 +0000 UTC m=+0.169379440 container init 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:41 np0005531754 podman[89939]: 2025-11-22 05:25:41.449302448 +0000 UTC m=+0.181021107 container start 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:41 np0005531754 podman[89939]: 2025-11-22 05:25:41.453057946 +0000 UTC m=+0.184776615 container attach 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:41 np0005531754 ecstatic_chatterjee[89959]: 167 167
Nov 22 00:25:41 np0005531754 systemd[1]: libpod-86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173.scope: Deactivated successfully.
Nov 22 00:25:41 np0005531754 podman[89939]: 2025-11-22 05:25:41.459621913 +0000 UTC m=+0.191340582 container died 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a6a79437b7770937595ea32d63bb7da1238bbc51ceea95732d691e6e52ca0f62-merged.mount: Deactivated successfully.
Nov 22 00:25:41 np0005531754 podman[89939]: 2025-11-22 05:25:41.508916634 +0000 UTC m=+0.240635303 container remove 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:41 np0005531754 systemd[1]: libpod-conmon-86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173.scope: Deactivated successfully.
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs mount
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 00:25:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs mount shared_bdev_used = 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Git sha 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: DB SUMMARY
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: DB Session ID:  CK3ECG8VCYUAVQEDRRWE
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                     Options.env: 0x56464d1e7d50
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                Options.info_log: 0x56464c3e47e0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.write_buffer_manager: 0x56464d2f0460
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.row_cache: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                              Options.wal_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.wal_compression: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_background_jobs: 4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Compression algorithms supported:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kZSTD supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kXpressCompression supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kZlibCompression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d3f0979e-ad59-462b-9b2c-5d2aa2e48d80
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141678458, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141678716, "job": 1, "event": "recovery_finished"}
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: freelist init
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: freelist _read_cfg
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs umount
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 00:25:41 np0005531754 podman[90186]: 2025-11-22 05:25:41.881641962 +0000 UTC m=+0.066744651 container create 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 22 00:25:41 np0005531754 ceph-mon[75840]: Deploying daemon osd.1 on compute-0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs mount
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluefs mount shared_bdev_used = 4718592
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Git sha 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: DB SUMMARY
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: DB Session ID:  CK3ECG8VCYUAVQEDRRWF
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                     Options.env: 0x56464d3807e0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                Options.info_log: 0x56464c3e4540
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.write_buffer_manager: 0x56464d2f0460
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:25:41 np0005531754 systemd[1]: Started libpod-conmon-4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a.scope.
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.row_cache: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                              Options.wal_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.wal_compression: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_background_jobs: 4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Compression algorithms supported:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kZSTD supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kXpressCompression supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kZlibCompression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56464c3d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 podman[90186]: 2025-11-22 05:25:41.854452546 +0000 UTC m=+0.039555305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56464c3d1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:41 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 00:25:41 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d3f0979e-ad59-462b-9b2c-5d2aa2e48d80
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141967643, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141971810, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3f0979e-ad59-462b-9b2c-5d2aa2e48d80", "db_session_id": "CK3ECG8VCYUAVQEDRRWF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141975229, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3f0979e-ad59-462b-9b2c-5d2aa2e48d80", "db_session_id": "CK3ECG8VCYUAVQEDRRWF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141978575, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3f0979e-ad59-462b-9b2c-5d2aa2e48d80", "db_session_id": "CK3ECG8VCYUAVQEDRRWF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:41 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141980454, "job": 1, "event": "recovery_finished"}
Nov 22 00:25:41 np0005531754 ceph-osd[89779]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 00:25:41 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:41 np0005531754 podman[90186]: 2025-11-22 05:25:41.992640745 +0000 UTC m=+0.177743504 container init 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:25:42 np0005531754 podman[90186]: 2025-11-22 05:25:42.005422367 +0000 UTC m=+0.190525026 container start 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56464c53e000
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: rocksdb: DB pointer 0x56464d2d9a00
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 22 00:25:42 np0005531754 podman[90186]: 2025-11-22 05:25:42.009386082 +0000 UTC m=+0.194488781 container attach 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usag
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: _get_class not permitted to load lua
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: _get_class not permitted to load sdk
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: _get_class not permitted to load test_remote_reads
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: osd.0 0 load_pgs
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: osd.0 0 load_pgs opened 0 pgs
Nov 22 00:25:42 np0005531754 ceph-osd[89779]: osd.0 0 log_to_monitors true
Nov 22 00:25:42 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0[89775]: 2025-11-22T05:25:42.018+0000 7f35e4ed6740 -1 osd.0 0 log_to_monitors true
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 00:25:42 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test[90206]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 00:25:42 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test[90206]:                            [--no-systemd] [--no-tmpfs]
Nov 22 00:25:42 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test[90206]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 00:25:42 np0005531754 systemd[1]: libpod-4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a.scope: Deactivated successfully.
Nov 22 00:25:42 np0005531754 podman[90186]: 2025-11-22 05:25:42.627115619 +0000 UTC m=+0.812218308 container died 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:42 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530-merged.mount: Deactivated successfully.
Nov 22 00:25:42 np0005531754 podman[90186]: 2025-11-22 05:25:42.709588544 +0000 UTC m=+0.894691213 container remove 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:42 np0005531754 systemd[1]: libpod-conmon-4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a.scope: Deactivated successfully.
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:42 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:42 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:42 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:42 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 00:25:43 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:43 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:43 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:43 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:43 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:43 np0005531754 systemd[1]: Starting Ceph osd.1 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:25:43
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] No pools available
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: osd.0 0 done with init, starting boot process
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: osd.0 0 start_boot
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 00:25:43 np0005531754 ceph-osd[89779]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:43 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 00:25:43 np0005531754 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 00:25:43 np0005531754 podman[90575]: 2025-11-22 05:25:43.975439676 +0000 UTC m=+0.090119708 container create cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 00:25:44 np0005531754 podman[90575]: 2025-11-22 05:25:43.934831358 +0000 UTC m=+0.049511440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:44 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:44 np0005531754 podman[90575]: 2025-11-22 05:25:44.079683965 +0000 UTC m=+0.194363987 container init cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:25:44 np0005531754 podman[90575]: 2025-11-22 05:25:44.09063938 +0000 UTC m=+0.205319382 container start cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:44 np0005531754 podman[90575]: 2025-11-22 05:25:44.11766385 +0000 UTC m=+0.232343872 container attach cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:25:44 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 00:25:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:44 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:44 np0005531754 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 00:25:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 00:25:45 np0005531754 bash[90575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 00:25:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 00:25:45 np0005531754 bash[90575]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 00:25:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 00:25:45 np0005531754 bash[90575]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 00:25:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 00:25:45 np0005531754 bash[90575]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 00:25:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:45 np0005531754 bash[90575]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 00:25:45 np0005531754 bash[90575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 00:25:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 00:25:45 np0005531754 bash[90575]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 00:25:45 np0005531754 systemd[1]: libpod-cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11.scope: Deactivated successfully.
Nov 22 00:25:45 np0005531754 podman[90575]: 2025-11-22 05:25:45.254147031 +0000 UTC m=+1.368827053 container died cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:45 np0005531754 systemd[1]: libpod-cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11.scope: Consumed 1.180s CPU time.
Nov 22 00:25:45 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54-merged.mount: Deactivated successfully.
Nov 22 00:25:45 np0005531754 podman[90575]: 2025-11-22 05:25:45.378097702 +0000 UTC m=+1.492777744 container remove cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:25:45 np0005531754 podman[90765]: 2025-11-22 05:25:45.652368431 +0000 UTC m=+0.054118563 container create 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:25:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:45 np0005531754 podman[90765]: 2025-11-22 05:25:45.627373796 +0000 UTC m=+0.029123958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:45 np0005531754 podman[90765]: 2025-11-22 05:25:45.766680188 +0000 UTC m=+0.168430351 container init 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:25:45 np0005531754 podman[90765]: 2025-11-22 05:25:45.777920192 +0000 UTC m=+0.179670314 container start 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: pidfile_write: ignore empty --pid-file
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 00:25:45 np0005531754 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 00:25:45 np0005531754 bash[90765]: 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01
Nov 22 00:25:45 np0005531754 systemd[1]: Started Ceph osd.1 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:25:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:45 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 00:25:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:45 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 22 00:25:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 00:25:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:46 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 22 00:25:46 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 22 00:25:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: load: jerasure load: lrc 
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 00:25:46 np0005531754 podman[90947]: 2025-11-22 05:25:46.860694392 +0000 UTC m=+0.098774888 container create cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:46 np0005531754 podman[90947]: 2025-11-22 05:25:46.802262045 +0000 UTC m=+0.040342591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluefs mount
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluefs mount shared_bdev_used = 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Git sha 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: DB SUMMARY
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: DB Session ID:  0I8ZSKYF4TFY47RR8FK5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                                     Options.env: 0x55e99eadfc70
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                                Options.info_log: 0x55e99dcda8a0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.write_buffer_manager: 0x55e99ebe6460
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.row_cache: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                              Options.wal_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.wal_compression: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_background_jobs: 4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Compression algorithms supported:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kZSTD supported: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kXpressCompression supported: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kZlibCompression supported: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56f38230-0c37-49fb-a62a-cda82e58aaf5
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789146953221, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789146953620, "job": 1, "event": "recovery_finished"}
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: freelist init
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: freelist _read_cfg
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bluefs umount
Nov 22 00:25:46 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 00:25:46 np0005531754 systemd[1]: Started libpod-conmon-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope.
Nov 22 00:25:47 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:47 np0005531754 podman[90947]: 2025-11-22 05:25:47.120773856 +0000 UTC m=+0.358854342 container init cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:47 np0005531754 podman[90947]: 2025-11-22 05:25:47.134722105 +0000 UTC m=+0.372802571 container start cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:25:47 np0005531754 trusting_murdock[91158]: 167 167
Nov 22 00:25:47 np0005531754 systemd[1]: libpod-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope: Deactivated successfully.
Nov 22 00:25:47 np0005531754 conmon[91158]: conmon cee90b474632913a3330 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope/container/memory.events
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: Deploying daemon osd.2 on compute-0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluefs mount
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluefs mount shared_bdev_used = 4718592
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Git sha 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: DB SUMMARY
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: DB Session ID:  0I8ZSKYF4TFY47RR8FK4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                                     Options.env: 0x55e99ec8e460
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                                Options.info_log: 0x55e99dcda620
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.write_buffer_manager: 0x55e99ebe6460
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.row_cache: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                              Options.wal_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.wal_compression: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_background_jobs: 4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Compression algorithms supported:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kZSTD supported: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kXpressCompression supported: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kZlibCompression supported: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e99dcc71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e99dcc7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56f38230-0c37-49fb-a62a-cda82e58aaf5
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147221890, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 00:25:47 np0005531754 python3[91185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:25:47 np0005531754 podman[90947]: 2025-11-22 05:25:47.261491715 +0000 UTC m=+0.499572201 container attach cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:25:47 np0005531754 podman[90947]: 2025-11-22 05:25:47.262041782 +0000 UTC m=+0.500122268 container died cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147276384, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56f38230-0c37-49fb-a62a-cda82e58aaf5", "db_session_id": "0I8ZSKYF4TFY47RR8FK4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147369074, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56f38230-0c37-49fb-a62a-cda82e58aaf5", "db_session_id": "0I8ZSKYF4TFY47RR8FK4", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147376637, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56f38230-0c37-49fb-a62a-cda82e58aaf5", "db_session_id": "0I8ZSKYF4TFY47RR8FK4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147391282, "job": 1, "event": "recovery_finished"}
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 00:25:47 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d3efe335e144baf4c5adb77d92d25728036f875d1131c7188f58074164177fcb-merged.mount: Deactivated successfully.
Nov 22 00:25:47 np0005531754 podman[90947]: 2025-11-22 05:25:47.42819494 +0000 UTC m=+0.666275396 container remove cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 00:25:47 np0005531754 podman[91384]: 2025-11-22 05:25:47.451112261 +0000 UTC m=+0.198023472 container create 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e99de34000
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: DB pointer 0x55e99ebcfa00
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usag
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: _get_class not permitted to load lua
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: _get_class not permitted to load sdk
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: _get_class not permitted to load test_remote_reads
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: osd.1 0 load_pgs
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: osd.1 0 load_pgs opened 0 pgs
Nov 22 00:25:47 np0005531754 ceph-osd[90784]: osd.1 0 log_to_monitors true
Nov 22 00:25:47 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1[90780]: 2025-11-22T05:25:47.462+0000 7fd69825f740 -1 osd.1 0 log_to_monitors true
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 00:25:47 np0005531754 systemd[1]: Started libpod-conmon-2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1.scope.
Nov 22 00:25:47 np0005531754 podman[91384]: 2025-11-22 05:25:47.391131403 +0000 UTC m=+0.138042694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:25:47 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 systemd[1]: libpod-conmon-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope: Deactivated successfully.
Nov 22 00:25:47 np0005531754 podman[91384]: 2025-11-22 05:25:47.547197105 +0000 UTC m=+0.294108326 container init 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:47 np0005531754 podman[91384]: 2025-11-22 05:25:47.553946097 +0000 UTC m=+0.300857308 container start 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:47 np0005531754 podman[91384]: 2025-11-22 05:25:47.567647918 +0000 UTC m=+0.314559129 container attach 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:25:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:47 np0005531754 podman[91452]: 2025-11-22 05:25:47.777616254 +0000 UTC m=+0.076901610 container create 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:47 np0005531754 podman[91452]: 2025-11-22 05:25:47.736716368 +0000 UTC m=+0.036001784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:47 np0005531754 systemd[1]: Started libpod-conmon-8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585.scope.
Nov 22 00:25:47 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:47 np0005531754 podman[91452]: 2025-11-22 05:25:47.882638139 +0000 UTC m=+0.181923485 container init 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:25:47 np0005531754 podman[91452]: 2025-11-22 05:25:47.891654323 +0000 UTC m=+0.190939679 container start 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:47 np0005531754 podman[91452]: 2025-11-22 05:25:47.898103336 +0000 UTC m=+0.197388682 container attach 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:25:47 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:47 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:48 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:48 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:48 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887174337' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 00:25:48 np0005531754 jovial_goodall[91437]: 
Nov 22 00:25:48 np0005531754 jovial_goodall[91437]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":111,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T05:25:45.657644+0000","services":{}},"progress_events":{}}
Nov 22 00:25:48 np0005531754 systemd[1]: libpod-2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1.scope: Deactivated successfully.
Nov 22 00:25:48 np0005531754 podman[91384]: 2025-11-22 05:25:48.219074726 +0000 UTC m=+0.965985937 container died 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4-merged.mount: Deactivated successfully.
Nov 22 00:25:48 np0005531754 podman[91384]: 2025-11-22 05:25:48.268954795 +0000 UTC m=+1.015866006 container remove 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:25:48 np0005531754 systemd[1]: libpod-conmon-2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1.scope: Deactivated successfully.
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.553 iops: 8333.664 elapsed_sec: 0.360
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: log_channel(cluster) log [WRN] : OSD bench result of 8333.664434 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 0 waiting for initial osdmap
Nov 22 00:25:48 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0[89775]: 2025-11-22T05:25:48.297+0000 7f35e166d640 -1 osd.0 0 waiting for initial osdmap
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 9 set_numa_affinity not setting numa affinity
Nov 22 00:25:48 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0[89775]: 2025-11-22T05:25:48.322+0000 7f35dc47e640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 00:25:48 np0005531754 ceph-osd[89779]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 22 00:25:48 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 00:25:48 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 00:25:48 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test[91468]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 00:25:48 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test[91468]:                            [--no-systemd] [--no-tmpfs]
Nov 22 00:25:48 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test[91468]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 00:25:48 np0005531754 systemd[1]: libpod-8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585.scope: Deactivated successfully.
Nov 22 00:25:48 np0005531754 podman[91452]: 2025-11-22 05:25:48.545198388 +0000 UTC m=+0.844483754 container died 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:25:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916-merged.mount: Deactivated successfully.
Nov 22 00:25:48 np0005531754 podman[91452]: 2025-11-22 05:25:48.636081316 +0000 UTC m=+0.935366632 container remove 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:25:48 np0005531754 systemd[1]: libpod-conmon-8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585.scope: Deactivated successfully.
Nov 22 00:25:48 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:48 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:48 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 00:25:48 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:48 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 22 00:25:49 np0005531754 ceph-osd[90784]: osd.1 0 done with init, starting boot process
Nov 22 00:25:49 np0005531754 ceph-osd[90784]: osd.1 0 start_boot
Nov 22 00:25:49 np0005531754 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 00:25:49 np0005531754 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 00:25:49 np0005531754 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 00:25:49 np0005531754 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 00:25:49 np0005531754 ceph-osd[90784]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453] boot
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:49 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:49 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:49 np0005531754 ceph-osd[89779]: osd.0 10 state: booting -> active
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: OSD bench result of 8333.664434 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 00:25:49 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:49 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:49 np0005531754 systemd[1]: Reloading.
Nov 22 00:25:49 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:25:49 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:25:49 np0005531754 systemd[1]: Starting Ceph osd.2 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:25:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 00:25:49 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] creating mgr pool
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 22 00:25:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 00:25:49 np0005531754 podman[91665]: 2025-11-22 05:25:49.865188382 +0000 UTC m=+0.097541819 container create b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 00:25:49 np0005531754 podman[91665]: 2025-11-22 05:25:49.809015095 +0000 UTC m=+0.041368572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:49 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:49 np0005531754 podman[91665]: 2025-11-22 05:25:49.992400675 +0000 UTC m=+0.224754072 container init b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:25:50 np0005531754 podman[91665]: 2025-11-22 05:25:50.000238082 +0000 UTC m=+0.232591509 container start b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:50 np0005531754 podman[91665]: 2025-11-22 05:25:50.014632585 +0000 UTC m=+0.246986022 container attach b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:50 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:50 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453] boot
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:50 np0005531754 ceph-osd[89779]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 00:25:50 np0005531754 ceph-osd[89779]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 22 00:25:50 np0005531754 ceph-osd[89779]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:50 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 22 00:25:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 00:25:50 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:51 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:51 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:51 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:51 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 00:25:51 np0005531754 bash[91665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 00:25:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 00:25:51 np0005531754 bash[91665]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 00:25:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 00:25:51 np0005531754 bash[91665]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 00:25:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 00:25:51 np0005531754 bash[91665]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 00:25:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:51 np0005531754 bash[91665]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 00:25:51 np0005531754 bash[91665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 00:25:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 00:25:51 np0005531754 bash[91665]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 00:25:51 np0005531754 systemd[1]: libpod-b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e.scope: Deactivated successfully.
Nov 22 00:25:51 np0005531754 podman[91665]: 2025-11-22 05:25:51.524933618 +0000 UTC m=+1.757287035 container died b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:25:51 np0005531754 systemd[1]: libpod-b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e.scope: Consumed 1.532s CPU time.
Nov 22 00:25:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c-merged.mount: Deactivated successfully.
Nov 22 00:25:51 np0005531754 podman[91665]: 2025-11-22 05:25:51.63909794 +0000 UTC m=+1.871451347 container remove b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:25:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 00:25:51 np0005531754 podman[91862]: 2025-11-22 05:25:51.936018642 +0000 UTC m=+0.079792411 container create 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:51 np0005531754 podman[91862]: 2025-11-22 05:25:51.89493132 +0000 UTC m=+0.038705139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:52 np0005531754 podman[91862]: 2025-11-22 05:25:52.030157836 +0000 UTC m=+0.173931605 container init 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:52 np0005531754 podman[91862]: 2025-11-22 05:25:52.04779097 +0000 UTC m=+0.191564729 container start 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:25:52 np0005531754 bash[91862]: 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb
Nov 22 00:25:52 np0005531754 systemd[1]: Started Ceph osd.2 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: pidfile_write: ignore empty --pid-file
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:52 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:52 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: load: jerasure load: lrc 
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:52 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 00:25:52 np0005531754 podman[92040]: 2025-11-22 05:25:52.996688328 +0000 UTC m=+0.095436414 container create 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:25:53 np0005531754 podman[92040]: 2025-11-22 05:25:52.940669776 +0000 UTC m=+0.039417912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:53 np0005531754 systemd[1]: Started libpod-conmon-71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d.scope.
Nov 22 00:25:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:53 np0005531754 podman[92040]: 2025-11-22 05:25:53.143062233 +0000 UTC m=+0.241810309 container init 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:53 np0005531754 podman[92040]: 2025-11-22 05:25:53.152328265 +0000 UTC m=+0.251076351 container start 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:53 np0005531754 eloquent_moore[92060]: 167 167
Nov 22 00:25:53 np0005531754 podman[92040]: 2025-11-22 05:25:53.159570153 +0000 UTC m=+0.258318229 container attach 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:25:53 np0005531754 systemd[1]: libpod-71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d.scope: Deactivated successfully.
Nov 22 00:25:53 np0005531754 podman[92040]: 2025-11-22 05:25:53.162972911 +0000 UTC m=+0.261720967 container died 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay-05264c2ba5da3af8d578e4da997979f4f65ef7c4379dadd836dcf985522454c4-merged.mount: Deactivated successfully.
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs mount
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs mount shared_bdev_used = 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Git sha 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: DB SUMMARY
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: DB Session ID:  2LZ21EPTPTNW4W1U2F4C
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                     Options.env: 0x55c27ae1dd50
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                Options.info_log: 0x55c27a01ea40
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.write_buffer_manager: 0x55c27af2e460
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.row_cache: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                              Options.wal_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.wal_compression: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_background_jobs: 4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Compression algorithms supported:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kZSTD supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kXpressCompression supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kZlibCompression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f080)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 podman[92040]: 2025-11-22 05:25:53.234274524 +0000 UTC m=+0.333022560 container remove 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f080)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f080)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 00d3a8ab-719a-4a16-94c2-99fe9381ec3c
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153240754, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153240995, "job": 1, "event": "recovery_finished"}
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: freelist init
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: freelist _read_cfg
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs umount
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 00:25:53 np0005531754 systemd[1]: libpod-conmon-71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d.scope: Deactivated successfully.
Nov 22 00:25:53 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 00:25:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:53 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:53 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:53 np0005531754 podman[92276]: 2025-11-22 05:25:53.39051812 +0000 UTC m=+0.053863115 container create 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.910 iops: 8424.898 elapsed_sec: 0.356
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: log_channel(cluster) log [WRN] : OSD bench result of 8424.897601 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 00:25:53 np0005531754 systemd[1]: Started libpod-conmon-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope.
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 0 waiting for initial osdmap
Nov 22 00:25:53 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1[90780]: 2025-11-22T05:25:53.441+0000 7fd6949f6640 -1 osd.1 0 waiting for initial osdmap
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Nov 22 00:25:53 np0005531754 podman[92276]: 2025-11-22 05:25:53.363379867 +0000 UTC m=+0.026724892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 12 set_numa_affinity not setting numa affinity
Nov 22 00:25:53 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1[90780]: 2025-11-22T05:25:53.466+0000 7fd68f807640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 00:25:53 np0005531754 ceph-osd[90784]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 22 00:25:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs mount
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluefs mount shared_bdev_used = 4718592
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 00:25:53 np0005531754 podman[92276]: 2025-11-22 05:25:53.498444536 +0000 UTC m=+0.161789531 container init 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: RocksDB version: 7.9.2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Git sha 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: DB SUMMARY
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: DB Session ID:  2LZ21EPTPTNW4W1U2F4D
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: CURRENT file:  CURRENT
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.error_if_exists: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.create_if_missing: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                     Options.env: 0x55c27afde3f0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                Options.info_log: 0x55c27a01e800
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                              Options.statistics: (nil)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.use_fsync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                              Options.db_log_dir: 
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.write_buffer_manager: 0x55c27af2e460
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.unordered_write: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.row_cache: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                              Options.wal_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.two_write_queues: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.wal_compression: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.atomic_flush: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_background_jobs: 4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_background_compactions: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_subcompactions: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.max_open_files: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Compression algorithms supported:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kZSTD supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kXpressCompression supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kZlibCompression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 podman[92276]: 2025-11-22 05:25:53.508163712 +0000 UTC m=+0.171508737 container start 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 podman[92276]: 2025-11-22 05:25:53.512717645 +0000 UTC m=+0.176062650 container attach 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c27a006dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01e5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01e5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01e5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c27a006430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 00d3a8ab-719a-4a16-94c2-99fe9381ec3c
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153520801, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153525144, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789153, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "00d3a8ab-719a-4a16-94c2-99fe9381ec3c", "db_session_id": "2LZ21EPTPTNW4W1U2F4D", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153528038, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789153, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "00d3a8ab-719a-4a16-94c2-99fe9381ec3c", "db_session_id": "2LZ21EPTPTNW4W1U2F4D", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153531051, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789153, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "00d3a8ab-719a-4a16-94c2-99fe9381ec3c", "db_session_id": "2LZ21EPTPTNW4W1U2F4D", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153534093, "job": 1, "event": "recovery_finished"}
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c27b00e000
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: DB pointer 0x55c27a041a00
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usag
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: _get_class not permitted to load lua
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: _get_class not permitted to load sdk
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: _get_class not permitted to load test_remote_reads
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2 0 load_pgs
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2 0 load_pgs opened 0 pgs
Nov 22 00:25:53 np0005531754 ceph-osd[91881]: osd.2 0 log_to_monitors true
Nov 22 00:25:53 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2[91877]: 2025-11-22T05:25:53.562+0000 7f68b4f02740 -1 osd.2 0 log_to_monitors true
Nov 22 00:25:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 22 00:25:53 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 00:25:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 00:25:54 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:54 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: OSD bench result of 8424.897601 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803] boot
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:54 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:54 np0005531754 ceph-osd[90784]: osd.1 13 state: booting -> active
Nov 22 00:25:54 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]: {
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "osd_id": 1,
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "type": "bluestore"
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:    },
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "osd_id": 2,
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "type": "bluestore"
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:    },
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "osd_id": 0,
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:        "type": "bluestore"
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]:    }
Nov 22 00:25:54 np0005531754 lucid_bhabha[92293]: }
Nov 22 00:25:54 np0005531754 systemd[1]: libpod-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope: Deactivated successfully.
Nov 22 00:25:54 np0005531754 podman[92276]: 2025-11-22 05:25:54.509877182 +0000 UTC m=+1.173222177 container died 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:25:54 np0005531754 systemd[1]: libpod-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope: Consumed 1.010s CPU time.
Nov 22 00:25:54 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f-merged.mount: Deactivated successfully.
Nov 22 00:25:54 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 00:25:54 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 00:25:54 np0005531754 podman[92276]: 2025-11-22 05:25:54.57177403 +0000 UTC m=+1.235119055 container remove 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:54 np0005531754 systemd[1]: libpod-conmon-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope: Deactivated successfully.
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 22 00:25:55 np0005531754 ceph-osd[91881]: osd.2 0 done with init, starting boot process
Nov 22 00:25:55 np0005531754 ceph-osd[91881]: osd.2 0 start_boot
Nov 22 00:25:55 np0005531754 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 00:25:55 np0005531754 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 00:25:55 np0005531754 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 00:25:55 np0005531754 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 00:25:55 np0005531754 ceph-osd[91881]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:55 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:55 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:55 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803] boot
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:25:55 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] creating main.db for devicehealth
Nov 22 00:25:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 00:25:55 np0005531754 podman[92779]: 2025-11-22 05:25:55.814570845 +0000 UTC m=+0.109261069 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:25:55 np0005531754 podman[92779]: 2025-11-22 05:25:55.927726896 +0000 UTC m=+0.222417030 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:25:55 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 00:25:55 np0005531754 ceph-mgr[76134]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 22 00:25:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 00:25:56 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:56 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:56 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:25:57 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 00:25:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:57 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:57 np0005531754 podman[93051]: 2025-11-22 05:25:57.546725569 +0000 UTC m=+0.047064411 container create be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:25:57 np0005531754 systemd[1]: Started libpod-conmon-be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e.scope.
Nov 22 00:25:57 np0005531754 podman[93051]: 2025-11-22 05:25:57.518599444 +0000 UTC m=+0.018938276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:57 np0005531754 podman[93051]: 2025-11-22 05:25:57.646127167 +0000 UTC m=+0.146466019 container init be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:25:57 np0005531754 podman[93051]: 2025-11-22 05:25:57.653419927 +0000 UTC m=+0.153758749 container start be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:25:57 np0005531754 vigorous_moore[93067]: 167 167
Nov 22 00:25:57 np0005531754 systemd[1]: libpod-be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e.scope: Deactivated successfully.
Nov 22 00:25:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 00:25:57 np0005531754 podman[93051]: 2025-11-22 05:25:57.667641654 +0000 UTC m=+0.167980686 container attach be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:25:57 np0005531754 podman[93051]: 2025-11-22 05:25:57.668521791 +0000 UTC m=+0.168860603 container died be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 00:25:57 np0005531754 systemd[1]: var-lib-containers-storage-overlay-06cba4066fbea54b3f9ad622c89ef008469f75a166d525da5745a9acd730c620-merged.mount: Deactivated successfully.
Nov 22 00:25:57 np0005531754 podman[93051]: 2025-11-22 05:25:57.78256162 +0000 UTC m=+0.282900432 container remove be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:25:57 np0005531754 systemd[1]: libpod-conmon-be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e.scope: Deactivated successfully.
Nov 22 00:25:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mscchl(active, since 74s)
Nov 22 00:25:57 np0005531754 podman[93095]: 2025-11-22 05:25:57.993798777 +0000 UTC m=+0.055833848 container create a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 00:25:58 np0005531754 systemd[1]: Started libpod-conmon-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope.
Nov 22 00:25:58 np0005531754 podman[93095]: 2025-11-22 05:25:57.969979538 +0000 UTC m=+0.032014629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:25:58 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:25:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:25:58 np0005531754 podman[93095]: 2025-11-22 05:25:58.142198117 +0000 UTC m=+0.204233228 container init a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:25:58 np0005531754 podman[93095]: 2025-11-22 05:25:58.153131881 +0000 UTC m=+0.215166982 container start a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:25:58 np0005531754 podman[93095]: 2025-11-22 05:25:58.178419657 +0000 UTC m=+0.240454758 container attach a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:25:58 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 00:25:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:58 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]: [
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:    {
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "available": false,
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "ceph_device": false,
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "lsm_data": {},
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "lvs": [],
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "path": "/dev/sr0",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "rejected_reasons": [
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "Insufficient space (<5GB)",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "Has a FileSystem"
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        ],
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        "sys_api": {
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "actuators": null,
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "device_nodes": "sr0",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "devname": "sr0",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "human_readable_size": "482.00 KB",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "id_bus": "ata",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "model": "QEMU DVD-ROM",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "nr_requests": "2",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "parent": "/dev/sr0",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "partitions": {},
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "path": "/dev/sr0",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "removable": "1",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "rev": "2.5+",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "ro": "0",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "rotational": "1",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "sas_address": "",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "sas_device_handle": "",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "scheduler_mode": "mq-deadline",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "sectors": 0,
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "sectorsize": "2048",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "size": 493568.0,
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "support_discard": "2048",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "type": "disk",
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:            "vendor": "QEMU"
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:        }
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]:    }
Nov 22 00:25:59 np0005531754 unruffled_bhaskara[93113]: ]
Nov 22 00:25:59 np0005531754 systemd[1]: libpod-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope: Deactivated successfully.
Nov 22 00:25:59 np0005531754 podman[93095]: 2025-11-22 05:25:59.603372743 +0000 UTC m=+1.665407864 container died a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:25:59 np0005531754 systemd[1]: libpod-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope: Consumed 1.483s CPU time.
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 00:25:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c-merged.mount: Deactivated successfully.
Nov 22 00:25:59 np0005531754 podman[93095]: 2025-11-22 05:25:59.713144068 +0000 UTC m=+1.775179169 container remove a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:25:59 np0005531754 systemd[1]: libpod-conmon-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope: Deactivated successfully.
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 978a8c5b-031a-422a-9d3e-ea4133a49b4b does not exist
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 65848e5d-ba47-4ec3-af77-b3fe8501ef1a does not exist
Nov 22 00:25:59 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ace50b9c-d33f-430f-9735-6f9756f407f4 does not exist
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:25:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 20.090 iops: 5143.141 elapsed_sec: 0.583
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: log_channel(cluster) log [WRN] : OSD bench result of 5143.140774 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 0 waiting for initial osdmap
Nov 22 00:26:00 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2[91877]: 2025-11-22T05:26:00.253+0000 7f68b1699640 -1 osd.2 0 waiting for initial osdmap
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 00:26:00 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2[91877]: 2025-11-22T05:26:00.293+0000 7f68ac4aa640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 15 set_numa_affinity not setting numa affinity
Nov 22 00:26:00 np0005531754 ceph-osd[91881]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 22 00:26:00 np0005531754 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 00:26:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:26:00 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:26:00 np0005531754 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 00:26:00 np0005531754 podman[94894]: 2025-11-22 05:26:00.570189396 +0000 UTC m=+0.068995632 container create ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:00 np0005531754 systemd[1]: Started libpod-conmon-ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56.scope.
Nov 22 00:26:00 np0005531754 podman[94894]: 2025-11-22 05:26:00.538275111 +0000 UTC m=+0.037081377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:00 np0005531754 podman[94894]: 2025-11-22 05:26:00.666978921 +0000 UTC m=+0.165785197 container init ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:26:00 np0005531754 podman[94894]: 2025-11-22 05:26:00.678732121 +0000 UTC m=+0.177538357 container start ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 00:26:00 np0005531754 pensive_solomon[94911]: 167 167
Nov 22 00:26:00 np0005531754 systemd[1]: libpod-ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56.scope: Deactivated successfully.
Nov 22 00:26:00 np0005531754 podman[94894]: 2025-11-22 05:26:00.686098053 +0000 UTC m=+0.184904289 container attach ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:00 np0005531754 podman[94894]: 2025-11-22 05:26:00.687143715 +0000 UTC m=+0.185949951 container died ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:26:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-19e4efbbd7b6ec0e085cd99d0309bba92ff4727a1271084bda39fa2c0dd535f5-merged.mount: Deactivated successfully.
Nov 22 00:26:00 np0005531754 podman[94894]: 2025-11-22 05:26:00.757135758 +0000 UTC m=+0.255941994 container remove ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:00 np0005531754 systemd[1]: libpod-conmon-ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56.scope: Deactivated successfully.
Nov 22 00:26:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 22 00:26:00 np0005531754 ceph-mon[75840]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 22 00:26:00 np0005531754 ceph-mon[75840]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 22 00:26:00 np0005531754 ceph-mon[75840]: OSD bench result of 5143.140774 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 00:26:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Nov 22 00:26:01 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084] boot
Nov 22 00:26:01 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Nov 22 00:26:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 00:26:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 00:26:01 np0005531754 ceph-osd[91881]: osd.2 16 state: booting -> active
Nov 22 00:26:01 np0005531754 podman[94935]: 2025-11-22 05:26:01.018045278 +0000 UTC m=+0.085721358 container create ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:26:01 np0005531754 podman[94935]: 2025-11-22 05:26:00.97839014 +0000 UTC m=+0.046066310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:01 np0005531754 systemd[1]: Started libpod-conmon-ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d.scope.
Nov 22 00:26:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:01 np0005531754 podman[94935]: 2025-11-22 05:26:01.12968336 +0000 UTC m=+0.197359450 container init ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:01 np0005531754 podman[94935]: 2025-11-22 05:26:01.140982367 +0000 UTC m=+0.208658477 container start ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:26:01 np0005531754 podman[94935]: 2025-11-22 05:26:01.150714612 +0000 UTC m=+0.218390732 container attach ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 22 00:26:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 22 00:26:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 22 00:26:02 np0005531754 ceph-mon[75840]: osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084] boot
Nov 22 00:26:02 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 22 00:26:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:02 np0005531754 hungry_torvalds[94951]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:26:02 np0005531754 hungry_torvalds[94951]: --> relative data size: 1.0
Nov 22 00:26:02 np0005531754 hungry_torvalds[94951]: --> All data devices are unavailable
Nov 22 00:26:02 np0005531754 systemd[1]: libpod-ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d.scope: Deactivated successfully.
Nov 22 00:26:02 np0005531754 podman[94935]: 2025-11-22 05:26:02.164298146 +0000 UTC m=+1.231974216 container died ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:26:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d-merged.mount: Deactivated successfully.
Nov 22 00:26:02 np0005531754 podman[94935]: 2025-11-22 05:26:02.223629693 +0000 UTC m=+1.291305763 container remove ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:26:02 np0005531754 systemd[1]: libpod-conmon-ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d.scope: Deactivated successfully.
Nov 22 00:26:03 np0005531754 podman[95131]: 2025-11-22 05:26:03.058035599 +0000 UTC m=+0.066011068 container create c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:26:03 np0005531754 systemd[1]: Started libpod-conmon-c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704.scope.
Nov 22 00:26:03 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:03 np0005531754 podman[95131]: 2025-11-22 05:26:03.031794943 +0000 UTC m=+0.039770492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:03 np0005531754 podman[95131]: 2025-11-22 05:26:03.143951502 +0000 UTC m=+0.151927061 container init c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 00:26:03 np0005531754 podman[95131]: 2025-11-22 05:26:03.156980191 +0000 UTC m=+0.164955690 container start c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:26:03 np0005531754 podman[95131]: 2025-11-22 05:26:03.162116723 +0000 UTC m=+0.170092292 container attach c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:26:03 np0005531754 angry_booth[95147]: 167 167
Nov 22 00:26:03 np0005531754 systemd[1]: libpod-c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704.scope: Deactivated successfully.
Nov 22 00:26:03 np0005531754 podman[95131]: 2025-11-22 05:26:03.16645128 +0000 UTC m=+0.174426829 container died c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:03 np0005531754 systemd[1]: var-lib-containers-storage-overlay-facabab25a0a5e3bb88efd09e95d1f72d9d46c08207272643c664574aad0b4d3-merged.mount: Deactivated successfully.
Nov 22 00:26:03 np0005531754 podman[95131]: 2025-11-22 05:26:03.218861739 +0000 UTC m=+0.226837278 container remove c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:26:03 np0005531754 systemd[1]: libpod-conmon-c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704.scope: Deactivated successfully.
Nov 22 00:26:03 np0005531754 podman[95171]: 2025-11-22 05:26:03.431985795 +0000 UTC m=+0.058273125 container create 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:26:03 np0005531754 systemd[1]: Started libpod-conmon-06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d.scope.
Nov 22 00:26:03 np0005531754 podman[95171]: 2025-11-22 05:26:03.410686685 +0000 UTC m=+0.036974045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:03 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:03 np0005531754 podman[95171]: 2025-11-22 05:26:03.531761465 +0000 UTC m=+0.158048785 container init 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 00:26:03 np0005531754 podman[95171]: 2025-11-22 05:26:03.547317614 +0000 UTC m=+0.173604934 container start 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:26:03 np0005531754 podman[95171]: 2025-11-22 05:26:03.551627539 +0000 UTC m=+0.177914889 container attach 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:26:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 22 00:26:04 np0005531754 goofy_pike[95187]: {
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:    "0": [
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:        {
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "devices": [
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "/dev/loop3"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            ],
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_name": "ceph_lv0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_size": "21470642176",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "name": "ceph_lv0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "tags": {
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.crush_device_class": "",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.encrypted": "0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osd_id": "0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.type": "block",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.vdo": "0"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            },
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "type": "block",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "vg_name": "ceph_vg0"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:        }
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:    ],
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:    "1": [
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:        {
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "devices": [
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "/dev/loop4"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            ],
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_name": "ceph_lv1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_size": "21470642176",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "name": "ceph_lv1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "tags": {
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.crush_device_class": "",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.encrypted": "0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osd_id": "1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.type": "block",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.vdo": "0"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            },
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "type": "block",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "vg_name": "ceph_vg1"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:        }
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:    ],
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:    "2": [
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:        {
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "devices": [
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "/dev/loop5"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            ],
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_name": "ceph_lv2",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_size": "21470642176",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "name": "ceph_lv2",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "tags": {
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.crush_device_class": "",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.encrypted": "0",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osd_id": "2",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.type": "block",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:                "ceph.vdo": "0"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            },
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "type": "block",
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:            "vg_name": "ceph_vg2"
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:        }
Nov 22 00:26:04 np0005531754 goofy_pike[95187]:    ]
Nov 22 00:26:04 np0005531754 goofy_pike[95187]: }
Nov 22 00:26:04 np0005531754 systemd[1]: libpod-06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d.scope: Deactivated successfully.
Nov 22 00:26:04 np0005531754 podman[95196]: 2025-11-22 05:26:04.379432247 +0000 UTC m=+0.023605973 container died 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:26:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f-merged.mount: Deactivated successfully.
Nov 22 00:26:04 np0005531754 podman[95196]: 2025-11-22 05:26:04.433849419 +0000 UTC m=+0.078023115 container remove 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:04 np0005531754 systemd[1]: libpod-conmon-06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d.scope: Deactivated successfully.
Nov 22 00:26:05 np0005531754 podman[95351]: 2025-11-22 05:26:05.14160556 +0000 UTC m=+0.049775768 container create 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:26:05 np0005531754 systemd[1]: Started libpod-conmon-6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64.scope.
Nov 22 00:26:05 np0005531754 podman[95351]: 2025-11-22 05:26:05.113805175 +0000 UTC m=+0.021975443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:05 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:05 np0005531754 podman[95351]: 2025-11-22 05:26:05.23345921 +0000 UTC m=+0.141629408 container init 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:05 np0005531754 podman[95351]: 2025-11-22 05:26:05.245071066 +0000 UTC m=+0.153241284 container start 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:05 np0005531754 angry_goldstine[95368]: 167 167
Nov 22 00:26:05 np0005531754 systemd[1]: libpod-6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64.scope: Deactivated successfully.
Nov 22 00:26:05 np0005531754 podman[95351]: 2025-11-22 05:26:05.251037193 +0000 UTC m=+0.159207471 container attach 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:26:05 np0005531754 podman[95351]: 2025-11-22 05:26:05.25186165 +0000 UTC m=+0.160031858 container died 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:26:05 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9a93b69e67ed9b1d214c0d2bb6b24f7d5ce479f3bbe7ac96d479521fdb636acb-merged.mount: Deactivated successfully.
Nov 22 00:26:05 np0005531754 podman[95351]: 2025-11-22 05:26:05.293056865 +0000 UTC m=+0.201227063 container remove 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:26:05 np0005531754 systemd[1]: libpod-conmon-6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64.scope: Deactivated successfully.
Nov 22 00:26:05 np0005531754 podman[95391]: 2025-11-22 05:26:05.527595916 +0000 UTC m=+0.048153997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:05 np0005531754 podman[95391]: 2025-11-22 05:26:05.779103919 +0000 UTC m=+0.299661960 container create c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:26:05 np0005531754 systemd[1]: Started libpod-conmon-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope.
Nov 22 00:26:05 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:05 np0005531754 podman[95391]: 2025-11-22 05:26:05.897176744 +0000 UTC m=+0.417734755 container init c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:26:05 np0005531754 podman[95391]: 2025-11-22 05:26:05.904441204 +0000 UTC m=+0.424999215 container start c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:26:05 np0005531754 podman[95391]: 2025-11-22 05:26:05.910896396 +0000 UTC m=+0.431454387 container attach c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]: {
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "osd_id": 1,
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "type": "bluestore"
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:    },
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "osd_id": 2,
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "type": "bluestore"
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:    },
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "osd_id": 0,
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:        "type": "bluestore"
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]:    }
Nov 22 00:26:06 np0005531754 hopeful_cori[95407]: }
Nov 22 00:26:06 np0005531754 systemd[1]: libpod-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope: Deactivated successfully.
Nov 22 00:26:06 np0005531754 podman[95391]: 2025-11-22 05:26:06.968943479 +0000 UTC m=+1.489501480 container died c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:06 np0005531754 systemd[1]: libpod-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope: Consumed 1.069s CPU time.
Nov 22 00:26:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay-14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78-merged.mount: Deactivated successfully.
Nov 22 00:26:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:07 np0005531754 podman[95391]: 2025-11-22 05:26:07.037015211 +0000 UTC m=+1.557573232 container remove c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:07 np0005531754 systemd[1]: libpod-conmon-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope: Deactivated successfully.
Nov 22 00:26:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:26:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:26:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:08 np0005531754 podman[95677]: 2025-11-22 05:26:08.035122967 +0000 UTC m=+0.049454467 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:26:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:08 np0005531754 podman[95677]: 2025-11-22 05:26:08.133910616 +0000 UTC m=+0.148242096 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:26:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:26:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:26:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:09 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e567e22d-c5d8-47b3-98de-df7b2d5b106c does not exist
Nov 22 00:26:09 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ca7fb3da-7c92-428b-afd5-6bef58d49637 does not exist
Nov 22 00:26:09 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 35b26a0d-eb33-4e7d-8de4-7099bd9052ba does not exist
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:26:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:26:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:10 np0005531754 podman[96067]: 2025-11-22 05:26:10.013036114 +0000 UTC m=+0.035061424 container create 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:10 np0005531754 systemd[1]: Started libpod-conmon-9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba.scope.
Nov 22 00:26:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:10 np0005531754 podman[96067]: 2025-11-22 05:26:09.998081883 +0000 UTC m=+0.020107203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:10 np0005531754 podman[96067]: 2025-11-22 05:26:10.096042246 +0000 UTC m=+0.118067586 container init 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:26:10 np0005531754 podman[96067]: 2025-11-22 05:26:10.102569182 +0000 UTC m=+0.124594502 container start 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:10 np0005531754 podman[96067]: 2025-11-22 05:26:10.105988809 +0000 UTC m=+0.128014219 container attach 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:10 np0005531754 ecstatic_feynman[96084]: 167 167
Nov 22 00:26:10 np0005531754 systemd[1]: libpod-9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba.scope: Deactivated successfully.
Nov 22 00:26:10 np0005531754 podman[96067]: 2025-11-22 05:26:10.108032373 +0000 UTC m=+0.130057693 container died 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:26:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:26:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:26:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-30de82f8eeb63aa7dab4ea686330c3ace5f0dc962071fe34efd793fb8eeacd51-merged.mount: Deactivated successfully.
Nov 22 00:26:10 np0005531754 podman[96067]: 2025-11-22 05:26:10.144974385 +0000 UTC m=+0.166999685 container remove 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:10 np0005531754 systemd[1]: libpod-conmon-9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba.scope: Deactivated successfully.
Nov 22 00:26:10 np0005531754 podman[96106]: 2025-11-22 05:26:10.336774511 +0000 UTC m=+0.072013887 container create 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:26:10 np0005531754 systemd[1]: Started libpod-conmon-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope.
Nov 22 00:26:10 np0005531754 podman[96106]: 2025-11-22 05:26:10.305498477 +0000 UTC m=+0.040737933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:10 np0005531754 podman[96106]: 2025-11-22 05:26:10.437725807 +0000 UTC m=+0.172965293 container init 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:10 np0005531754 podman[96106]: 2025-11-22 05:26:10.449717245 +0000 UTC m=+0.184956631 container start 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:10 np0005531754 podman[96106]: 2025-11-22 05:26:10.45400975 +0000 UTC m=+0.189249216 container attach 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:11 np0005531754 musing_galois[96122]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:26:11 np0005531754 musing_galois[96122]: --> relative data size: 1.0
Nov 22 00:26:11 np0005531754 musing_galois[96122]: --> All data devices are unavailable
Nov 22 00:26:11 np0005531754 systemd[1]: libpod-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope: Deactivated successfully.
Nov 22 00:26:11 np0005531754 systemd[1]: libpod-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope: Consumed 1.077s CPU time.
Nov 22 00:26:11 np0005531754 podman[96106]: 2025-11-22 05:26:11.576220651 +0000 UTC m=+1.311460027 container died 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:12 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8-merged.mount: Deactivated successfully.
Nov 22 00:26:12 np0005531754 podman[96106]: 2025-11-22 05:26:12.491236253 +0000 UTC m=+2.226475629 container remove 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:26:12 np0005531754 systemd[1]: libpod-conmon-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope: Deactivated successfully.
Nov 22 00:26:13 np0005531754 podman[96305]: 2025-11-22 05:26:13.209000939 +0000 UTC m=+0.046221136 container create 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 00:26:13 np0005531754 systemd[1]: Started libpod-conmon-1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff.scope.
Nov 22 00:26:13 np0005531754 podman[96305]: 2025-11-22 05:26:13.189707501 +0000 UTC m=+0.026927698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:13 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:13 np0005531754 podman[96305]: 2025-11-22 05:26:13.30279099 +0000 UTC m=+0.140011187 container init 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:26:13 np0005531754 podman[96305]: 2025-11-22 05:26:13.314183598 +0000 UTC m=+0.151403775 container start 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 00:26:13 np0005531754 podman[96305]: 2025-11-22 05:26:13.317796431 +0000 UTC m=+0.155016648 container attach 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:26:13 np0005531754 naughty_noether[96320]: 167 167
Nov 22 00:26:13 np0005531754 systemd[1]: libpod-1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff.scope: Deactivated successfully.
Nov 22 00:26:13 np0005531754 podman[96305]: 2025-11-22 05:26:13.322730657 +0000 UTC m=+0.159950874 container died 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7a22abe11b9d0008be9ede1805424b7471c5f57ddc00e6a34f947d5abe6d6be1-merged.mount: Deactivated successfully.
Nov 22 00:26:13 np0005531754 podman[96305]: 2025-11-22 05:26:13.376573361 +0000 UTC m=+0.213793548 container remove 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:13 np0005531754 systemd[1]: libpod-conmon-1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff.scope: Deactivated successfully.
Nov 22 00:26:13 np0005531754 podman[96344]: 2025-11-22 05:26:13.603848643 +0000 UTC m=+0.067398422 container create 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:13 np0005531754 systemd[1]: Started libpod-conmon-5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558.scope.
Nov 22 00:26:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:13 np0005531754 podman[96344]: 2025-11-22 05:26:13.575194491 +0000 UTC m=+0.038744320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:13 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:13 np0005531754 podman[96344]: 2025-11-22 05:26:13.740815522 +0000 UTC m=+0.204365291 container init 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:26:13 np0005531754 podman[96344]: 2025-11-22 05:26:13.753063388 +0000 UTC m=+0.216613167 container start 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:26:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:26:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:26:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:26:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:26:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:26:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:26:13 np0005531754 podman[96344]: 2025-11-22 05:26:13.778508478 +0000 UTC m=+0.242058237 container attach 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]: {
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:    "0": [
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:        {
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "devices": [
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "/dev/loop3"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            ],
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_name": "ceph_lv0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_size": "21470642176",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "name": "ceph_lv0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "tags": {
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.crush_device_class": "",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.encrypted": "0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osd_id": "0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.type": "block",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.vdo": "0"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            },
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "type": "block",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "vg_name": "ceph_vg0"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:        }
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:    ],
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:    "1": [
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:        {
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "devices": [
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "/dev/loop4"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            ],
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_name": "ceph_lv1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_size": "21470642176",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "name": "ceph_lv1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "tags": {
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.crush_device_class": "",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.encrypted": "0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osd_id": "1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.type": "block",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.vdo": "0"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            },
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "type": "block",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "vg_name": "ceph_vg1"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:        }
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:    ],
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:    "2": [
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:        {
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "devices": [
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "/dev/loop5"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            ],
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_name": "ceph_lv2",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_size": "21470642176",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "name": "ceph_lv2",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "tags": {
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.crush_device_class": "",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.encrypted": "0",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osd_id": "2",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.type": "block",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:                "ceph.vdo": "0"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            },
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "type": "block",
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:            "vg_name": "ceph_vg2"
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:        }
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]:    ]
Nov 22 00:26:14 np0005531754 laughing_chaum[96360]: }
Nov 22 00:26:14 np0005531754 systemd[1]: libpod-5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558.scope: Deactivated successfully.
Nov 22 00:26:14 np0005531754 podman[96344]: 2025-11-22 05:26:14.589969982 +0000 UTC m=+1.053519761 container died 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:14 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a-merged.mount: Deactivated successfully.
Nov 22 00:26:14 np0005531754 podman[96344]: 2025-11-22 05:26:14.679173099 +0000 UTC m=+1.142722888 container remove 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:26:14 np0005531754 systemd[1]: libpod-conmon-5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558.scope: Deactivated successfully.
Nov 22 00:26:15 np0005531754 podman[96523]: 2025-11-22 05:26:15.406967029 +0000 UTC m=+0.054261137 container create b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:15 np0005531754 systemd[1]: Started libpod-conmon-b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c.scope.
Nov 22 00:26:15 np0005531754 podman[96523]: 2025-11-22 05:26:15.382003535 +0000 UTC m=+0.029297683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:15 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:15 np0005531754 podman[96523]: 2025-11-22 05:26:15.499347546 +0000 UTC m=+0.146641684 container init b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:26:15 np0005531754 podman[96523]: 2025-11-22 05:26:15.50646002 +0000 UTC m=+0.153754148 container start b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:26:15 np0005531754 podman[96523]: 2025-11-22 05:26:15.510111665 +0000 UTC m=+0.157405773 container attach b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 00:26:15 np0005531754 great_gagarin[96540]: 167 167
Nov 22 00:26:15 np0005531754 systemd[1]: libpod-b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c.scope: Deactivated successfully.
Nov 22 00:26:15 np0005531754 podman[96523]: 2025-11-22 05:26:15.511850889 +0000 UTC m=+0.159144998 container died b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:15 np0005531754 systemd[1]: var-lib-containers-storage-overlay-786a7cc9bc52571cdc8d2b0422b9fa0824e54db84eedd2e6d2ae775353f4478a-merged.mount: Deactivated successfully.
Nov 22 00:26:15 np0005531754 podman[96523]: 2025-11-22 05:26:15.562762822 +0000 UTC m=+0.210056900 container remove b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:26:15 np0005531754 systemd[1]: libpod-conmon-b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c.scope: Deactivated successfully.
Nov 22 00:26:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:15 np0005531754 podman[96563]: 2025-11-22 05:26:15.755525507 +0000 UTC m=+0.056344474 container create e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:26:15 np0005531754 systemd[1]: Started libpod-conmon-e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8.scope.
Nov 22 00:26:15 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:15 np0005531754 podman[96563]: 2025-11-22 05:26:15.726281137 +0000 UTC m=+0.027100094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:15 np0005531754 podman[96563]: 2025-11-22 05:26:15.852624353 +0000 UTC m=+0.153443360 container init e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:15 np0005531754 podman[96563]: 2025-11-22 05:26:15.866539721 +0000 UTC m=+0.167358688 container start e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:26:15 np0005531754 podman[96563]: 2025-11-22 05:26:15.870917959 +0000 UTC m=+0.171736936 container attach e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 00:26:16 np0005531754 confident_nash[96579]: {
Nov 22 00:26:16 np0005531754 confident_nash[96579]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "osd_id": 1,
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "type": "bluestore"
Nov 22 00:26:16 np0005531754 confident_nash[96579]:    },
Nov 22 00:26:16 np0005531754 confident_nash[96579]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "osd_id": 2,
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "type": "bluestore"
Nov 22 00:26:16 np0005531754 confident_nash[96579]:    },
Nov 22 00:26:16 np0005531754 confident_nash[96579]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "osd_id": 0,
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:26:16 np0005531754 confident_nash[96579]:        "type": "bluestore"
Nov 22 00:26:16 np0005531754 confident_nash[96579]:    }
Nov 22 00:26:16 np0005531754 confident_nash[96579]: }
Nov 22 00:26:16 np0005531754 systemd[1]: libpod-e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8.scope: Deactivated successfully.
Nov 22 00:26:16 np0005531754 podman[96563]: 2025-11-22 05:26:16.809673407 +0000 UTC m=+1.110492344 container died e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:26:16 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b-merged.mount: Deactivated successfully.
Nov 22 00:26:16 np0005531754 podman[96563]: 2025-11-22 05:26:16.8638001 +0000 UTC m=+1.164619027 container remove e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:26:16 np0005531754 systemd[1]: libpod-conmon-e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8.scope: Deactivated successfully.
Nov 22 00:26:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:26:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:26:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:18 np0005531754 python3[96697]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:18 np0005531754 podman[96699]: 2025-11-22 05:26:18.579453305 +0000 UTC m=+0.049394775 container create f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:18 np0005531754 systemd[1]: Started libpod-conmon-f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4.scope.
Nov 22 00:26:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:18 np0005531754 podman[96699]: 2025-11-22 05:26:18.554430848 +0000 UTC m=+0.024372308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:18 np0005531754 podman[96699]: 2025-11-22 05:26:18.672960408 +0000 UTC m=+0.142901868 container init f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 00:26:18 np0005531754 podman[96699]: 2025-11-22 05:26:18.680253817 +0000 UTC m=+0.150195167 container start f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:18 np0005531754 podman[96699]: 2025-11-22 05:26:18.684838292 +0000 UTC m=+0.154779672 container attach f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:26:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 00:26:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306877251' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 00:26:19 np0005531754 competent_keller[96716]: 
Nov 22 00:26:19 np0005531754 competent_keller[96716]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":142,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1763789160,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502763520,"bytes_avail":63909163008,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T05:25:45.657644+0000","services":{}},"progress_events":{}}
Nov 22 00:26:19 np0005531754 systemd[1]: libpod-f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4.scope: Deactivated successfully.
Nov 22 00:26:19 np0005531754 podman[96699]: 2025-11-22 05:26:19.296288871 +0000 UTC m=+0.766230231 container died f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:19 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1-merged.mount: Deactivated successfully.
Nov 22 00:26:19 np0005531754 podman[96699]: 2025-11-22 05:26:19.418001731 +0000 UTC m=+0.887943091 container remove f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:19 np0005531754 systemd[1]: libpod-conmon-f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4.scope: Deactivated successfully.
Nov 22 00:26:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:19 np0005531754 python3[96779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:19 np0005531754 podman[96780]: 2025-11-22 05:26:19.966647685 +0000 UTC m=+0.049560220 container create 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:19 np0005531754 systemd[1]: Started libpod-conmon-05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789.scope.
Nov 22 00:26:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22784bf8c7499d4088debc0e3bbb3a5e78581f0d9269a7d298d4e6ebe958fc8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22784bf8c7499d4088debc0e3bbb3a5e78581f0d9269a7d298d4e6ebe958fc8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:20 np0005531754 podman[96780]: 2025-11-22 05:26:19.944007533 +0000 UTC m=+0.026920088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:20 np0005531754 podman[96780]: 2025-11-22 05:26:20.043235215 +0000 UTC m=+0.126147760 container init 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:26:20 np0005531754 podman[96780]: 2025-11-22 05:26:20.04975568 +0000 UTC m=+0.132668225 container start 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:20 np0005531754 podman[96780]: 2025-11-22 05:26:20.054755737 +0000 UTC m=+0.137668272 container attach 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:26:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 00:26:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 22 00:26:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 22 00:26:20 np0005531754 tender_leakey[96795]: pool 'vms' created
Nov 22 00:26:20 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 22 00:26:20 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:20 np0005531754 systemd[1]: libpod-05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789.scope: Deactivated successfully.
Nov 22 00:26:20 np0005531754 podman[96780]: 2025-11-22 05:26:20.993965501 +0000 UTC m=+1.076878046 container died 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:26:21 np0005531754 systemd[1]: var-lib-containers-storage-overlay-22784bf8c7499d4088debc0e3bbb3a5e78581f0d9269a7d298d4e6ebe958fc8b-merged.mount: Deactivated successfully.
Nov 22 00:26:21 np0005531754 podman[96780]: 2025-11-22 05:26:21.038467721 +0000 UTC m=+1.121380246 container remove 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:26:21 np0005531754 systemd[1]: libpod-conmon-05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789.scope: Deactivated successfully.
Nov 22 00:26:21 np0005531754 python3[96859]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:21 np0005531754 podman[96860]: 2025-11-22 05:26:21.447411119 +0000 UTC m=+0.063974814 container create 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:21 np0005531754 systemd[1]: Started libpod-conmon-4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769.scope.
Nov 22 00:26:21 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ba5d876ead03d9ebf57e25109ada2cdbd9652fff6d52a8a65911711c0dd060/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ba5d876ead03d9ebf57e25109ada2cdbd9652fff6d52a8a65911711c0dd060/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:21 np0005531754 podman[96860]: 2025-11-22 05:26:21.508558002 +0000 UTC m=+0.125121727 container init 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:21 np0005531754 podman[96860]: 2025-11-22 05:26:21.513700094 +0000 UTC m=+0.130263789 container start 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:26:21 np0005531754 podman[96860]: 2025-11-22 05:26:21.420311426 +0000 UTC m=+0.036875211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:21 np0005531754 podman[96860]: 2025-11-22 05:26:21.517093761 +0000 UTC m=+0.133657486 container attach 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:26:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v58: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:21 np0005531754 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 22 00:26:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 22 00:26:21 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:21 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 22 00:26:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 22 00:26:22 np0005531754 loving_babbage[96875]: pool 'volumes' created
Nov 22 00:26:22 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 22 00:26:23 np0005531754 systemd[1]: libpod-4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769.scope: Deactivated successfully.
Nov 22 00:26:23 np0005531754 podman[96860]: 2025-11-22 05:26:23.024705519 +0000 UTC m=+1.641269264 container died 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 00:26:23 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e0ba5d876ead03d9ebf57e25109ada2cdbd9652fff6d52a8a65911711c0dd060-merged.mount: Deactivated successfully.
Nov 22 00:26:23 np0005531754 podman[96860]: 2025-11-22 05:26:23.071128539 +0000 UTC m=+1.687692244 container remove 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:26:23 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:23 np0005531754 systemd[1]: libpod-conmon-4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769.scope: Deactivated successfully.
Nov 22 00:26:23 np0005531754 python3[96939]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:23 np0005531754 podman[96940]: 2025-11-22 05:26:23.497641041 +0000 UTC m=+0.042696765 container create b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:23 np0005531754 systemd[1]: Started libpod-conmon-b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9.scope.
Nov 22 00:26:23 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed3cdf3bf84c59eed85f6fba053134027d6d099778c2804104b2979b1161172/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed3cdf3bf84c59eed85f6fba053134027d6d099778c2804104b2979b1161172/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:23 np0005531754 podman[96940]: 2025-11-22 05:26:23.479337275 +0000 UTC m=+0.024393019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:23 np0005531754 podman[96940]: 2025-11-22 05:26:23.584465273 +0000 UTC m=+0.129521027 container init b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 00:26:23 np0005531754 podman[96940]: 2025-11-22 05:26:23.594763997 +0000 UTC m=+0.139819721 container start b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:26:23 np0005531754 podman[96940]: 2025-11-22 05:26:23.598382491 +0000 UTC m=+0.143438215 container attach b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 00:26:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v61: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 22 00:26:24 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 22 00:26:24 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 22 00:26:24 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 00:26:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 22 00:26:25 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 22 00:26:25 np0005531754 elated_faraday[96955]: pool 'backups' created
Nov 22 00:26:25 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 22 00:26:25 np0005531754 systemd[1]: libpod-b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9.scope: Deactivated successfully.
Nov 22 00:26:25 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:25 np0005531754 podman[96982]: 2025-11-22 05:26:25.097657648 +0000 UTC m=+0.032039230 container died b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 00:26:25 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6ed3cdf3bf84c59eed85f6fba053134027d6d099778c2804104b2979b1161172-merged.mount: Deactivated successfully.
Nov 22 00:26:25 np0005531754 podman[96982]: 2025-11-22 05:26:25.155512468 +0000 UTC m=+0.089894080 container remove b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 00:26:25 np0005531754 systemd[1]: libpod-conmon-b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9.scope: Deactivated successfully.
Nov 22 00:26:25 np0005531754 python3[97022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:25 np0005531754 podman[97023]: 2025-11-22 05:26:25.605056573 +0000 UTC m=+0.062278260 container create b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:25 np0005531754 systemd[1]: Started libpod-conmon-b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa.scope.
Nov 22 00:26:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v64: 4 pgs: 2 active+clean, 1 unknown, 1 creating+peering; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:25 np0005531754 podman[97023]: 2025-11-22 05:26:25.577509796 +0000 UTC m=+0.034731563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:25 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86637ab9764a8ab82883b18a973a3ad4130c47ae007f16c5a101ea13aca12e77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86637ab9764a8ab82883b18a973a3ad4130c47ae007f16c5a101ea13aca12e77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:25 np0005531754 podman[97023]: 2025-11-22 05:26:25.711933896 +0000 UTC m=+0.169155643 container init b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:25 np0005531754 podman[97023]: 2025-11-22 05:26:25.723764169 +0000 UTC m=+0.180985886 container start b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 00:26:25 np0005531754 podman[97023]: 2025-11-22 05:26:25.728212528 +0000 UTC m=+0.185434285 container attach b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 22 00:26:26 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 22 00:26:26 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 22 00:26:26 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 00:26:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 22 00:26:27 np0005531754 reverent_vaughan[97039]: pool 'images' created
Nov 22 00:26:27 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 22 00:26:27 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:27 np0005531754 systemd[1]: libpod-b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa.scope: Deactivated successfully.
Nov 22 00:26:27 np0005531754 podman[97023]: 2025-11-22 05:26:27.087139679 +0000 UTC m=+1.544361366 container died b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:27 np0005531754 systemd[1]: var-lib-containers-storage-overlay-86637ab9764a8ab82883b18a973a3ad4130c47ae007f16c5a101ea13aca12e77-merged.mount: Deactivated successfully.
Nov 22 00:26:27 np0005531754 podman[97023]: 2025-11-22 05:26:27.144603677 +0000 UTC m=+1.601825364 container remove b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:26:27 np0005531754 systemd[1]: libpod-conmon-b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa.scope: Deactivated successfully.
Nov 22 00:26:27 np0005531754 python3[97103]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:27 np0005531754 podman[97104]: 2025-11-22 05:26:27.545544013 +0000 UTC m=+0.048784287 container create cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:27 np0005531754 systemd[1]: Started libpod-conmon-cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae.scope.
Nov 22 00:26:27 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:27 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e8e15133c79f9e488af66f9cd4ecde192455e7d6a522fee39e64f7c584f63a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:27 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e8e15133c79f9e488af66f9cd4ecde192455e7d6a522fee39e64f7c584f63a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:27 np0005531754 podman[97104]: 2025-11-22 05:26:27.522348852 +0000 UTC m=+0.025589176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:27 np0005531754 podman[97104]: 2025-11-22 05:26:27.62334937 +0000 UTC m=+0.126589724 container init cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:27 np0005531754 podman[97104]: 2025-11-22 05:26:27.634557303 +0000 UTC m=+0.137797607 container start cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:26:27 np0005531754 podman[97104]: 2025-11-22 05:26:27.63859483 +0000 UTC m=+0.141835124 container attach cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v67: 5 pgs: 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 22 00:26:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 22 00:26:28 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 22 00:26:28 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:28 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 00:26:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 22 00:26:29 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 22 00:26:29 np0005531754 reverent_gould[97119]: pool 'cephfs.cephfs.meta' created
Nov 22 00:26:29 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 22 00:26:29 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:29 np0005531754 systemd[1]: libpod-cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae.scope: Deactivated successfully.
Nov 22 00:26:29 np0005531754 podman[97104]: 2025-11-22 05:26:29.11862114 +0000 UTC m=+1.621861464 container died cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:29 np0005531754 systemd[1]: var-lib-containers-storage-overlay-64e8e15133c79f9e488af66f9cd4ecde192455e7d6a522fee39e64f7c584f63a-merged.mount: Deactivated successfully.
Nov 22 00:26:29 np0005531754 podman[97104]: 2025-11-22 05:26:29.182885393 +0000 UTC m=+1.686125667 container remove cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:26:29 np0005531754 systemd[1]: libpod-conmon-cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae.scope: Deactivated successfully.
Nov 22 00:26:29 np0005531754 python3[97185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:29 np0005531754 podman[97186]: 2025-11-22 05:26:29.602070783 +0000 UTC m=+0.076438956 container create 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:29 np0005531754 systemd[1]: Started libpod-conmon-4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d.scope.
Nov 22 00:26:29 np0005531754 podman[97186]: 2025-11-22 05:26:29.573019929 +0000 UTC m=+0.047388152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v70: 6 pgs: 3 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:29 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:29 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3881393720dca2dbf81e56eb2514bc333c5d604e9f88498ce90b198030688fac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:29 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3881393720dca2dbf81e56eb2514bc333c5d604e9f88498ce90b198030688fac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:29 np0005531754 podman[97186]: 2025-11-22 05:26:29.699144648 +0000 UTC m=+0.173512811 container init 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:29 np0005531754 podman[97186]: 2025-11-22 05:26:29.71001244 +0000 UTC m=+0.184380573 container start 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:26:29 np0005531754 podman[97186]: 2025-11-22 05:26:29.713881072 +0000 UTC m=+0.188249245 container attach 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 22 00:26:30 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 22 00:26:30 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 22 00:26:30 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 00:26:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 22 00:26:31 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 22 00:26:31 np0005531754 dreamy_chandrasekhar[97202]: pool 'cephfs.cephfs.data' created
Nov 22 00:26:31 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 22 00:26:31 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 00:26:31 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:31 np0005531754 systemd[1]: libpod-4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d.scope: Deactivated successfully.
Nov 22 00:26:31 np0005531754 podman[97186]: 2025-11-22 05:26:31.14903601 +0000 UTC m=+1.623404143 container died 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 00:26:31 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3881393720dca2dbf81e56eb2514bc333c5d604e9f88498ce90b198030688fac-merged.mount: Deactivated successfully.
Nov 22 00:26:31 np0005531754 podman[97186]: 2025-11-22 05:26:31.189039558 +0000 UTC m=+1.663407681 container remove 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:26:31 np0005531754 systemd[1]: libpod-conmon-4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d.scope: Deactivated successfully.
Nov 22 00:26:31 np0005531754 python3[97266]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:31 np0005531754 podman[97267]: 2025-11-22 05:26:31.592967049 +0000 UTC m=+0.049349845 container create b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:31 np0005531754 systemd[1]: Started libpod-conmon-b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2.scope.
Nov 22 00:26:31 np0005531754 podman[97267]: 2025-11-22 05:26:31.571261735 +0000 UTC m=+0.027644531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:31 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846864df1f0da42b82ccea21a853f7f795473def5ccb020e6563dab7cf00a16a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846864df1f0da42b82ccea21a853f7f795473def5ccb020e6563dab7cf00a16a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:31 np0005531754 podman[97267]: 2025-11-22 05:26:31.69345108 +0000 UTC m=+0.149833866 container init b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:31 np0005531754 podman[97267]: 2025-11-22 05:26:31.700528843 +0000 UTC m=+0.156911629 container start b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:26:31 np0005531754 podman[97267]: 2025-11-22 05:26:31.703716144 +0000 UTC m=+0.160098960 container attach b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 22 00:26:32 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 22 00:26:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 00:26:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 22 00:26:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 00:26:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 22 00:26:33 np0005531754 unruffled_proskuriakova[97282]: enabled application 'rbd' on pool 'vms'
Nov 22 00:26:33 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 22 00:26:33 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 00:26:33 np0005531754 systemd[1]: libpod-b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2.scope: Deactivated successfully.
Nov 22 00:26:33 np0005531754 podman[97267]: 2025-11-22 05:26:33.181147002 +0000 UTC m=+1.637529788 container died b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:33 np0005531754 systemd[1]: var-lib-containers-storage-overlay-846864df1f0da42b82ccea21a853f7f795473def5ccb020e6563dab7cf00a16a-merged.mount: Deactivated successfully.
Nov 22 00:26:33 np0005531754 podman[97267]: 2025-11-22 05:26:33.238193748 +0000 UTC m=+1.694576544 container remove b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:33 np0005531754 systemd[1]: libpod-conmon-b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2.scope: Deactivated successfully.
Nov 22 00:26:33 np0005531754 python3[97344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:33 np0005531754 podman[97345]: 2025-11-22 05:26:33.599318901 +0000 UTC m=+0.056640214 container create 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:33 np0005531754 systemd[1]: Started libpod-conmon-49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294.scope.
Nov 22 00:26:33 np0005531754 podman[97345]: 2025-11-22 05:26:33.569051258 +0000 UTC m=+0.026372621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:33 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:33 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcbe4b20654eb2d95fe91962bc49567c5e0d976df175ea1d4414663f482f433/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:33 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcbe4b20654eb2d95fe91962bc49567c5e0d976df175ea1d4414663f482f433/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:33 np0005531754 podman[97345]: 2025-11-22 05:26:33.701431044 +0000 UTC m=+0.158752327 container init 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 00:26:33 np0005531754 podman[97345]: 2025-11-22 05:26:33.712419499 +0000 UTC m=+0.169740822 container start 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:33 np0005531754 podman[97345]: 2025-11-22 05:26:33.716820637 +0000 UTC m=+0.174141950 container attach 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:26:34 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 00:26:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 22 00:26:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 00:26:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 22 00:26:35 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 00:26:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 00:26:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 22 00:26:35 np0005531754 elastic_herschel[97360]: enabled application 'rbd' on pool 'volumes'
Nov 22 00:26:35 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 22 00:26:35 np0005531754 systemd[1]: libpod-49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294.scope: Deactivated successfully.
Nov 22 00:26:35 np0005531754 podman[97345]: 2025-11-22 05:26:35.219446319 +0000 UTC m=+1.676767682 container died 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:26:35 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1dcbe4b20654eb2d95fe91962bc49567c5e0d976df175ea1d4414663f482f433-merged.mount: Deactivated successfully.
Nov 22 00:26:35 np0005531754 podman[97345]: 2025-11-22 05:26:35.263520522 +0000 UTC m=+1.720841805 container remove 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:26:35 np0005531754 systemd[1]: libpod-conmon-49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294.scope: Deactivated successfully.
Nov 22 00:26:35 np0005531754 python3[97422]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:35 np0005531754 podman[97423]: 2025-11-22 05:26:35.685042905 +0000 UTC m=+0.075251808 container create 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:35 np0005531754 systemd[1]: Started libpod-conmon-94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97.scope.
Nov 22 00:26:35 np0005531754 podman[97423]: 2025-11-22 05:26:35.652220136 +0000 UTC m=+0.042429089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:35 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ea67531ee2f1b9be1c25356e78f30031b78b80c33b54a1856b5e50a3fa2d37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ea67531ee2f1b9be1c25356e78f30031b78b80c33b54a1856b5e50a3fa2d37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:35 np0005531754 podman[97423]: 2025-11-22 05:26:35.772900386 +0000 UTC m=+0.163109309 container init 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:35 np0005531754 podman[97423]: 2025-11-22 05:26:35.778076713 +0000 UTC m=+0.168285576 container start 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:26:35 np0005531754 podman[97423]: 2025-11-22 05:26:35.78151821 +0000 UTC m=+0.171727123 container attach 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:26:36 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 00:26:36 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 22 00:26:36 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 22 00:26:37 np0005531754 thirsty_hermann[97438]: enabled application 'rbd' on pool 'backups'
Nov 22 00:26:37 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 22 00:26:37 np0005531754 systemd[1]: libpod-94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97.scope: Deactivated successfully.
Nov 22 00:26:37 np0005531754 podman[97423]: 2025-11-22 05:26:37.251901161 +0000 UTC m=+1.642110084 container died 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:37 np0005531754 systemd[1]: var-lib-containers-storage-overlay-70ea67531ee2f1b9be1c25356e78f30031b78b80c33b54a1856b5e50a3fa2d37-merged.mount: Deactivated successfully.
Nov 22 00:26:37 np0005531754 podman[97423]: 2025-11-22 05:26:37.306632824 +0000 UTC m=+1.696841687 container remove 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:37 np0005531754 systemd[1]: libpod-conmon-94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97.scope: Deactivated successfully.
Nov 22 00:26:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:37 np0005531754 python3[97500]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:37 np0005531754 podman[97501]: 2025-11-22 05:26:37.776387146 +0000 UTC m=+0.067195126 container create 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:37 np0005531754 systemd[1]: Started libpod-conmon-67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45.scope.
Nov 22 00:26:37 np0005531754 podman[97501]: 2025-11-22 05:26:37.748858025 +0000 UTC m=+0.039666085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:37 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:37 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a2cfe0f42ec77eb4627dab1b6335ba9bef1bd4725c7444a2c68a598ef420e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:37 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a2cfe0f42ec77eb4627dab1b6335ba9bef1bd4725c7444a2c68a598ef420e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:37 np0005531754 podman[97501]: 2025-11-22 05:26:37.887417349 +0000 UTC m=+0.178225389 container init 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:26:37 np0005531754 podman[97501]: 2025-11-22 05:26:37.897708911 +0000 UTC m=+0.188516911 container start 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:37 np0005531754 podman[97501]: 2025-11-22 05:26:37.901900096 +0000 UTC m=+0.192708096 container attach 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:26:38 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 00:26:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 22 00:26:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 00:26:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 22 00:26:39 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 00:26:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 00:26:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 22 00:26:39 np0005531754 dreamy_lehmann[97516]: enabled application 'rbd' on pool 'images'
Nov 22 00:26:39 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 22 00:26:39 np0005531754 systemd[1]: libpod-67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45.scope: Deactivated successfully.
Nov 22 00:26:39 np0005531754 podman[97501]: 2025-11-22 05:26:39.275681258 +0000 UTC m=+1.566489258 container died 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 00:26:39 np0005531754 systemd[1]: var-lib-containers-storage-overlay-89a2cfe0f42ec77eb4627dab1b6335ba9bef1bd4725c7444a2c68a598ef420e8-merged.mount: Deactivated successfully.
Nov 22 00:26:39 np0005531754 podman[97501]: 2025-11-22 05:26:39.332341715 +0000 UTC m=+1.623149675 container remove 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:39 np0005531754 systemd[1]: libpod-conmon-67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45.scope: Deactivated successfully.
Nov 22 00:26:39 np0005531754 python3[97577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:39 np0005531754 podman[97578]: 2025-11-22 05:26:39.676711079 +0000 UTC m=+0.036201848 container create 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:39 np0005531754 systemd[1]: Started libpod-conmon-57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348.scope.
Nov 22 00:26:39 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:39 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcee86317bf3de8502311e990474beaafc3d7e88d91347bebcc2984cc139e280/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:39 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcee86317bf3de8502311e990474beaafc3d7e88d91347bebcc2984cc139e280/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:39 np0005531754 podman[97578]: 2025-11-22 05:26:39.744008246 +0000 UTC m=+0.103499005 container init 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:39 np0005531754 podman[97578]: 2025-11-22 05:26:39.750904181 +0000 UTC m=+0.110394960 container start 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:39 np0005531754 podman[97578]: 2025-11-22 05:26:39.658225152 +0000 UTC m=+0.017715941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:39 np0005531754 podman[97578]: 2025-11-22 05:26:39.754886361 +0000 UTC m=+0.114377160 container attach 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:26:40 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 00:26:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 22 00:26:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 00:26:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 22 00:26:41 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 00:26:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 00:26:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 22 00:26:41 np0005531754 friendly_brown[97594]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 22 00:26:41 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 22 00:26:41 np0005531754 systemd[1]: libpod-57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348.scope: Deactivated successfully.
Nov 22 00:26:41 np0005531754 podman[97578]: 2025-11-22 05:26:41.315628359 +0000 UTC m=+1.675119148 container died 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:26:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-dcee86317bf3de8502311e990474beaafc3d7e88d91347bebcc2984cc139e280-merged.mount: Deactivated successfully.
Nov 22 00:26:41 np0005531754 podman[97578]: 2025-11-22 05:26:41.373714488 +0000 UTC m=+1.733205297 container remove 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:26:41 np0005531754 systemd[1]: libpod-conmon-57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348.scope: Deactivated successfully.
Nov 22 00:26:41 np0005531754 python3[97656]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:41 np0005531754 podman[97657]: 2025-11-22 05:26:41.721999161 +0000 UTC m=+0.044221749 container create 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:41 np0005531754 systemd[1]: Started libpod-conmon-4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67.scope.
Nov 22 00:26:41 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:41 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5bd6e23e5feec6eaaf1a1185b70feccfcca95eff12dd1444b1174ddd03e2ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:41 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5bd6e23e5feec6eaaf1a1185b70feccfcca95eff12dd1444b1174ddd03e2ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:41 np0005531754 podman[97657]: 2025-11-22 05:26:41.787983888 +0000 UTC m=+0.110206236 container init 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 00:26:41 np0005531754 podman[97657]: 2025-11-22 05:26:41.79291754 +0000 UTC m=+0.115139878 container start 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:41 np0005531754 podman[97657]: 2025-11-22 05:26:41.700960566 +0000 UTC m=+0.023182924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:41 np0005531754 podman[97657]: 2025-11-22 05:26:41.79650004 +0000 UTC m=+0.118722378 container attach 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:26:42 np0005531754 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:42 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 00:26:42 np0005531754 ceph-mon[75840]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 00:26:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 22 00:26:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 00:26:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 22 00:26:43 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 00:26:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 00:26:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 22 00:26:43 np0005531754 angry_zhukovsky[97672]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 22 00:26:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 22 00:26:43 np0005531754 systemd[1]: libpod-4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67.scope: Deactivated successfully.
Nov 22 00:26:43 np0005531754 podman[97657]: 2025-11-22 05:26:43.322593447 +0000 UTC m=+1.644815785 container died 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:43 np0005531754 systemd[1]: var-lib-containers-storage-overlay-2d5bd6e23e5feec6eaaf1a1185b70feccfcca95eff12dd1444b1174ddd03e2ae-merged.mount: Deactivated successfully.
Nov 22 00:26:43 np0005531754 podman[97657]: 2025-11-22 05:26:43.360535792 +0000 UTC m=+1.682758130 container remove 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:26:43 np0005531754 systemd[1]: libpod-conmon-4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67.scope: Deactivated successfully.
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:26:43
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:26:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:26:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:26:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:26:44 np0005531754 python3[97785]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 22 00:26:44 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 49c4bc42-23cd-4ec0-84cf-b1f877a6a039 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 00:26:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:44 np0005531754 python3[97856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789203.9993408-36522-241325823660178/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:26:45 np0005531754 python3[97958]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 22 00:26:45 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 43235f00-d76b-41ad-a422-a6b30308113f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: Cluster is now healthy
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:45 np0005531754 python3[98033]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789204.9594376-36536-114795781789323/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=6cbec36551ab2122646d939859c1d167146d375a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:26:46 np0005531754 python3[98083]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:46 np0005531754 podman[98084]: 2025-11-22 05:26:46.119234518 +0000 UTC m=+0.056522235 container create ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:46 np0005531754 systemd[1]: Started libpod-conmon-ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718.scope.
Nov 22 00:26:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:46 np0005531754 podman[98084]: 2025-11-22 05:26:46.182250468 +0000 UTC m=+0.119538285 container init ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 00:26:46 np0005531754 podman[98084]: 2025-11-22 05:26:46.189977863 +0000 UTC m=+0.127265580 container start ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 00:26:46 np0005531754 podman[98084]: 2025-11-22 05:26:46.19339555 +0000 UTC m=+0.130683307 container attach ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:46 np0005531754 podman[98084]: 2025-11-22 05:26:46.099332829 +0000 UTC m=+0.036620576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 22 00:26:46 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev f1dff32a-54af-40a2-bd07-f3d878b142d3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 00:26:46 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=9.691080093s) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active pruub 68.570899963s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:46 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=9.691080093s) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown pruub 68.570899963s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 00:26:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 00:26:46 np0005531754 heuristic_napier[98099]: 
Nov 22 00:26:46 np0005531754 heuristic_napier[98099]: [global]
Nov 22 00:26:46 np0005531754 heuristic_napier[98099]: #011fsid = 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 00:26:46 np0005531754 heuristic_napier[98099]: #011mon_host = 192.168.122.100
Nov 22 00:26:46 np0005531754 systemd[1]: libpod-ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718.scope: Deactivated successfully.
Nov 22 00:26:46 np0005531754 podman[98084]: 2025-11-22 05:26:46.698382115 +0000 UTC m=+0.635669872 container died ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 00:26:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266-merged.mount: Deactivated successfully.
Nov 22 00:26:46 np0005531754 podman[98084]: 2025-11-22 05:26:46.754085701 +0000 UTC m=+0.691373428 container remove ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 00:26:46 np0005531754 systemd[1]: libpod-conmon-ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718.scope: Deactivated successfully.
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:47 np0005531754 python3[98240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:47 np0005531754 podman[98260]: 2025-11-22 05:26:47.180409962 +0000 UTC m=+0.051161464 container create c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 00:26:47 np0005531754 systemd[1]: Started libpod-conmon-c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7.scope.
Nov 22 00:26:47 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:47 np0005531754 podman[98260]: 2025-11-22 05:26:47.16612888 +0000 UTC m=+0.036880382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:47 np0005531754 podman[98260]: 2025-11-22 05:26:47.27255837 +0000 UTC m=+0.143309922 container init c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 00:26:47 np0005531754 podman[98260]: 2025-11-22 05:26:47.280588521 +0000 UTC m=+0.151340033 container start c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:47 np0005531754 podman[98260]: 2025-11-22 05:26:47.283777433 +0000 UTC m=+0.154528945 container attach c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 22 00:26:47 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 9ecf5ac5-058c-44f9-a159-ae9513330978 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=38/39 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:47 np0005531754 podman[98347]: 2025-11-22 05:26:47.62915951 +0000 UTC m=+0.064881393 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:26:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v92: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:47 np0005531754 podman[98347]: 2025-11-22 05:26:47.754955316 +0000 UTC m=+0.190677119 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 22 00:26:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1464073565' entity='client.admin' 
Nov 22 00:26:47 np0005531754 frosty_nobel[98300]: set ssl_option
Nov 22 00:26:47 np0005531754 systemd[1]: libpod-c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7.scope: Deactivated successfully.
Nov 22 00:26:47 np0005531754 podman[98260]: 2025-11-22 05:26:47.954458574 +0000 UTC m=+0.825210076 container died c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:26:47 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c-merged.mount: Deactivated successfully.
Nov 22 00:26:48 np0005531754 podman[98260]: 2025-11-22 05:26:48.00572865 +0000 UTC m=+0.876480162 container remove c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:48 np0005531754 systemd[1]: libpod-conmon-c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7.scope: Deactivated successfully.
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 94f6dbbf-31fa-4715-a2a7-52459d366468 does not exist
Nov 22 00:26:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 89e1cd2a-5951-4e74-8294-8fae06c4c8ec does not exist
Nov 22 00:26:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ee54b519-fbba-401e-8162-75efe9bd0aac does not exist
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:26:48 np0005531754 python3[98525]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 22 00:26:48 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev a318c3d6-fe2a-412e-b7f4-daa6f1e4413a (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/1464073565' entity='client.admin' 
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=11.716521263s) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active pruub 66.521041870s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=38 pruub=13.638353348s) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active pruub 68.442977905s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=11.716521263s) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown pruub 66.521041870s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=38 pruub=13.638353348s) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown pruub 68.442977905s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 podman[98560]: 2025-11-22 05:26:48.385700016 +0000 UTC m=+0.044077984 container create 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:48 np0005531754 systemd[1]: Started libpod-conmon-1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9.scope.
Nov 22 00:26:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:48 np0005531754 podman[98560]: 2025-11-22 05:26:48.367748811 +0000 UTC m=+0.026126799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:48 np0005531754 podman[98560]: 2025-11-22 05:26:48.465011464 +0000 UTC m=+0.123389452 container init 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:26:48 np0005531754 podman[98560]: 2025-11-22 05:26:48.471144992 +0000 UTC m=+0.129522990 container start 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:26:48 np0005531754 podman[98560]: 2025-11-22 05:26:48.475138083 +0000 UTC m=+0.133516051 container attach 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:48 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=9.470194817s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active pruub 76.047958374s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:48 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=9.470194817s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown pruub 76.047958374s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 22 00:26:48 np0005531754 podman[98687]: 2025-11-22 05:26:48.780029017 +0000 UTC m=+0.038167652 container create c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:48 np0005531754 systemd[1]: Started libpod-conmon-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope.
Nov 22 00:26:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:48 np0005531754 podman[98687]: 2025-11-22 05:26:48.845067082 +0000 UTC m=+0.103205737 container init c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 22 00:26:48 np0005531754 podman[98687]: 2025-11-22 05:26:48.850024395 +0000 UTC m=+0.108163040 container start c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:48 np0005531754 elastic_hamilton[98722]: 167 167
Nov 22 00:26:48 np0005531754 podman[98687]: 2025-11-22 05:26:48.853198286 +0000 UTC m=+0.111336941 container attach c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:48 np0005531754 systemd[1]: libpod-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope: Deactivated successfully.
Nov 22 00:26:48 np0005531754 conmon[98722]: conmon c0d1418aca576a486721 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope/container/memory.events
Nov 22 00:26:48 np0005531754 podman[98687]: 2025-11-22 05:26:48.855877807 +0000 UTC m=+0.114016462 container died c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:48 np0005531754 podman[98687]: 2025-11-22 05:26:48.762069222 +0000 UTC m=+0.020207877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-24f47fe30a3bfec703caa6a161c5113f3eff7f43dd33ed327eff2e64cf0402a1-merged.mount: Deactivated successfully.
Nov 22 00:26:48 np0005531754 podman[98687]: 2025-11-22 05:26:48.890127398 +0000 UTC m=+0.148266033 container remove c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:26:48 np0005531754 systemd[1]: libpod-conmon-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope: Deactivated successfully.
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:49 np0005531754 heuristic_gagarin[98617]: Scheduled rgw.rgw update...
Nov 22 00:26:49 np0005531754 systemd[1]: libpod-1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9.scope: Deactivated successfully.
Nov 22 00:26:49 np0005531754 podman[98560]: 2025-11-22 05:26:49.04982924 +0000 UTC m=+0.708207238 container died 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:49 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590-merged.mount: Deactivated successfully.
Nov 22 00:26:49 np0005531754 podman[98560]: 2025-11-22 05:26:49.112519862 +0000 UTC m=+0.770897870 container remove 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:49 np0005531754 systemd[1]: libpod-conmon-1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9.scope: Deactivated successfully.
Nov 22 00:26:49 np0005531754 podman[98748]: 2025-11-22 05:26:49.144421962 +0000 UTC m=+0.058790957 container create a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:26:49 np0005531754 systemd[1]: Started libpod-conmon-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope.
Nov 22 00:26:49 np0005531754 podman[98748]: 2025-11-22 05:26:49.112760959 +0000 UTC m=+0.027130004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:49 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:49 np0005531754 podman[98748]: 2025-11-22 05:26:49.244179801 +0000 UTC m=+0.158548856 container init a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:26:49 np0005531754 podman[98748]: 2025-11-22 05:26:49.252000597 +0000 UTC m=+0.166369592 container start a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:49 np0005531754 podman[98748]: 2025-11-22 05:26:49.255706051 +0000 UTC m=+0.170075046 container attach a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev db1b119e-34db-4b05-adcf-b3739a6d94a0 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 49c4bc42-23cd-4ec0-84cf-b1f877a6a039 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 49c4bc42-23cd-4ec0-84cf-b1f877a6a039 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 43235f00-d76b-41ad-a422-a6b30308113f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 43235f00-d76b-41ad-a422-a6b30308113f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev f1dff32a-54af-40a2-bd07-f3d878b142d3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event f1dff32a-54af-40a2-bd07-f3d878b142d3 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 9ecf5ac5-058c-44f9-a159-ae9513330978 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 9ecf5ac5-058c-44f9-a159-ae9513330978 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev a318c3d6-fe2a-412e-b7f4-daa6f1e4413a (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event a318c3d6-fe2a-412e-b7f4-daa6f1e4413a (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev db1b119e-34db-4b05-adcf-b3739a6d94a0 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event db1b119e-34db-4b05-adcf-b3739a6d94a0 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=38/41 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=40/41 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=40/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:49 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 22 00:26:49 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 22 00:26:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v95: 131 pgs: 124 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:50 np0005531754 python3[98860]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:26:50 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 22 00:26:50 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 22 00:26:50 np0005531754 gallant_wright[98777]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:26:50 np0005531754 gallant_wright[98777]: --> relative data size: 1.0
Nov 22 00:26:50 np0005531754 gallant_wright[98777]: --> All data devices are unavailable
Nov 22 00:26:50 np0005531754 systemd[1]: libpod-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope: Deactivated successfully.
Nov 22 00:26:50 np0005531754 systemd[1]: libpod-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope: Consumed 1.010s CPU time.
Nov 22 00:26:50 np0005531754 conmon[98777]: conmon a8e15ff78f17ff8cd97d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope/container/memory.events
Nov 22 00:26:50 np0005531754 podman[98748]: 2025-11-22 05:26:50.322732568 +0000 UTC m=+1.237101523 container died a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:26:50 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974-merged.mount: Deactivated successfully.
Nov 22 00:26:50 np0005531754 systemd[77455]: Starting Mark boot as successful...
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 22 00:26:50 np0005531754 systemd[77455]: Finished Mark boot as successful.
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 22 00:26:50 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42 pruub=11.733865738s) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active pruub 80.106666565s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:50 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42 pruub=11.733865738s) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown pruub 80.106666565s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:50 np0005531754 podman[98748]: 2025-11-22 05:26:50.386823152 +0000 UTC m=+1.301192107 container remove a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: Saving service rgw.rgw spec with placement compute-0
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:50 np0005531754 systemd[1]: libpod-conmon-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope: Deactivated successfully.
Nov 22 00:26:50 np0005531754 python3[98952]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789209.822787-36577-201479021486685/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:26:50 np0005531754 python3[99124]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:51 np0005531754 podman[99156]: 2025-11-22 05:26:51.018613346 +0000 UTC m=+0.035110442 container create 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:51 np0005531754 systemd[1]: Started libpod-conmon-1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3.scope.
Nov 22 00:26:51 np0005531754 podman[99163]: 2025-11-22 05:26:51.062522596 +0000 UTC m=+0.049163579 container create 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:26:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:51 np0005531754 podman[99156]: 2025-11-22 05:26:51.003559807 +0000 UTC m=+0.020056943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:51 np0005531754 systemd[1]: Started libpod-conmon-5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4.scope.
Nov 22 00:26:51 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 22 00:26:51 np0005531754 podman[99156]: 2025-11-22 05:26:51.11281795 +0000 UTC m=+0.129315126 container init 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:26:51 np0005531754 podman[99156]: 2025-11-22 05:26:51.118678732 +0000 UTC m=+0.135175818 container start 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:51 np0005531754 affectionate_goldwasser[99183]: 167 167
Nov 22 00:26:51 np0005531754 podman[99156]: 2025-11-22 05:26:51.122946718 +0000 UTC m=+0.139443884 container attach 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:51 np0005531754 podman[99156]: 2025-11-22 05:26:51.123383109 +0000 UTC m=+0.139880195 container died 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 00:26:51 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 22 00:26:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:51 np0005531754 systemd[1]: libpod-1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3.scope: Deactivated successfully.
Nov 22 00:26:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:51 np0005531754 podman[99163]: 2025-11-22 05:26:51.045554974 +0000 UTC m=+0.032195987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-671b7fe88a110ed4dc2cd0d9d80fac0f9607796752c67294f0e5d09abe01acf7-merged.mount: Deactivated successfully.
Nov 22 00:26:51 np0005531754 podman[99163]: 2025-11-22 05:26:51.150690194 +0000 UTC m=+0.137331197 container init 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:51 np0005531754 podman[99163]: 2025-11-22 05:26:51.158460399 +0000 UTC m=+0.145101422 container start 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:51 np0005531754 podman[99156]: 2025-11-22 05:26:51.168610318 +0000 UTC m=+0.185107394 container remove 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:26:51 np0005531754 systemd[1]: libpod-conmon-1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3.scope: Deactivated successfully.
Nov 22 00:26:51 np0005531754 podman[99163]: 2025-11-22 05:26:51.181670392 +0000 UTC m=+0.168311375 container attach 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 22 00:26:51 np0005531754 podman[99214]: 2025-11-22 05:26:51.378715605 +0000 UTC m=+0.055414071 container create 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.16( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.10( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.18( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.9( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=42/43 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.18( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 systemd[1]: Started libpod-conmon-7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680.scope.
Nov 22 00:26:51 np0005531754 podman[99214]: 2025-11-22 05:26:51.356575366 +0000 UTC m=+0.033273862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:51 np0005531754 podman[99214]: 2025-11-22 05:26:51.490549556 +0000 UTC m=+0.167248032 container init 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:51 np0005531754 podman[99214]: 2025-11-22 05:26:51.499814705 +0000 UTC m=+0.176513151 container start 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:51 np0005531754 podman[99214]: 2025-11-22 05:26:51.503101749 +0000 UTC m=+0.179800205 container attach 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:26:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=42 pruub=12.465258598s) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active pruub 76.696258545s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=42 pruub=12.465258598s) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown pruub 76.696258545s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:51 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:26:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 00:26:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0[75836]: 2025-11-22T05:26:51.739+0000 7f1d8fe96640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e2 new map
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-22T05:26:51.740040+0000#012modified#0112025-11-22T05:26:51.740073+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 22 00:26:51 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 00:26:51 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=42/44 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:51 np0005531754 systemd[1]: libpod-5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4.scope: Deactivated successfully.
Nov 22 00:26:51 np0005531754 podman[99256]: 2025-11-22 05:26:51.851299319 +0000 UTC m=+0.042275994 container died 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd-merged.mount: Deactivated successfully.
Nov 22 00:26:51 np0005531754 podman[99256]: 2025-11-22 05:26:51.889971921 +0000 UTC m=+0.080948566 container remove 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:51 np0005531754 systemd[1]: libpod-conmon-5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4.scope: Deactivated successfully.
Nov 22 00:26:52 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 22 00:26:52 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:52 np0005531754 python3[99294]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:52 np0005531754 podman[99297]: 2025-11-22 05:26:52.287592606 +0000 UTC m=+0.044446714 container create b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]: {
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:    "0": [
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:        {
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "devices": [
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "/dev/loop3"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            ],
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_name": "ceph_lv0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_size": "21470642176",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "name": "ceph_lv0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "tags": {
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.crush_device_class": "",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.encrypted": "0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osd_id": "0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.type": "block",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.vdo": "0"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            },
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "type": "block",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "vg_name": "ceph_vg0"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:        }
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:    ],
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:    "1": [
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:        {
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "devices": [
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "/dev/loop4"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            ],
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_name": "ceph_lv1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_size": "21470642176",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "name": "ceph_lv1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "tags": {
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.crush_device_class": "",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.encrypted": "0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osd_id": "1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.type": "block",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.vdo": "0"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            },
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "type": "block",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "vg_name": "ceph_vg1"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:        }
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:    ],
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:    "2": [
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:        {
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "devices": [
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "/dev/loop5"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            ],
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_name": "ceph_lv2",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_size": "21470642176",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "name": "ceph_lv2",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "tags": {
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.cluster_name": "ceph",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.crush_device_class": "",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.encrypted": "0",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osd_id": "2",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.type": "block",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:                "ceph.vdo": "0"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            },
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "type": "block",
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:            "vg_name": "ceph_vg2"
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:        }
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]:    ]
Nov 22 00:26:52 np0005531754 distracted_neumann[99230]: }
Nov 22 00:26:52 np0005531754 podman[99214]: 2025-11-22 05:26:52.317803267 +0000 UTC m=+0.994501713 container died 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:26:52 np0005531754 systemd[1]: Started libpod-conmon-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope.
Nov 22 00:26:52 np0005531754 systemd[1]: libpod-7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680.scope: Deactivated successfully.
Nov 22 00:26:52 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d-merged.mount: Deactivated successfully.
Nov 22 00:26:52 np0005531754 podman[99297]: 2025-11-22 05:26:52.26869 +0000 UTC m=+0.025544108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:52 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:52 np0005531754 podman[99214]: 2025-11-22 05:26:52.373515852 +0000 UTC m=+1.050214318 container remove 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:52 np0005531754 podman[99297]: 2025-11-22 05:26:52.378127387 +0000 UTC m=+0.134981495 container init b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:52 np0005531754 podman[99297]: 2025-11-22 05:26:52.385546854 +0000 UTC m=+0.142400952 container start b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 00:26:52 np0005531754 podman[99297]: 2025-11-22 05:26:52.389636726 +0000 UTC m=+0.146490834 container attach b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:52 np0005531754 systemd[1]: libpod-conmon-7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680.scope: Deactivated successfully.
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: Saving service mds.cephfs spec with placement compute-0
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 00:26:52 np0005531754 ceph-mgr[76134]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 00:26:52 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 00:26:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:52 np0005531754 keen_dijkstra[99316]: Scheduled mds.cephfs update...
Nov 22 00:26:52 np0005531754 systemd[1]: libpod-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope: Deactivated successfully.
Nov 22 00:26:52 np0005531754 conmon[99316]: conmon b82f4dbe4310ff7a2554 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope/container/memory.events
Nov 22 00:26:52 np0005531754 podman[99297]: 2025-11-22 05:26:52.988510678 +0000 UTC m=+0.745364776 container died b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:26:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay-daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0-merged.mount: Deactivated successfully.
Nov 22 00:26:53 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 22 00:26:53 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 22 00:26:53 np0005531754 podman[99490]: 2025-11-22 05:26:53.068927741 +0000 UTC m=+0.074089381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:53 np0005531754 podman[99297]: 2025-11-22 05:26:53.165651312 +0000 UTC m=+0.922505450 container remove b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:26:53 np0005531754 podman[99490]: 2025-11-22 05:26:53.247662691 +0000 UTC m=+0.252824301 container create e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:26:53 np0005531754 systemd[1]: libpod-conmon-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope: Deactivated successfully.
Nov 22 00:26:53 np0005531754 systemd[1]: Started libpod-conmon-e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae.scope.
Nov 22 00:26:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:53 np0005531754 podman[99490]: 2025-11-22 05:26:53.318832366 +0000 UTC m=+0.323994016 container init e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:53 np0005531754 podman[99490]: 2025-11-22 05:26:53.3257242 +0000 UTC m=+0.330885770 container start e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:26:53 np0005531754 podman[99490]: 2025-11-22 05:26:53.329153588 +0000 UTC m=+0.334315188 container attach e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:26:53 np0005531754 elastic_yalow[99516]: 167 167
Nov 22 00:26:53 np0005531754 systemd[1]: libpod-e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae.scope: Deactivated successfully.
Nov 22 00:26:53 np0005531754 podman[99490]: 2025-11-22 05:26:53.332714388 +0000 UTC m=+0.337875968 container died e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:26:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay-39bc85c58b1979ffc844f60eb9f6af826e3dd56397c6bce89f3d5eaa46606a23-merged.mount: Deactivated successfully.
Nov 22 00:26:53 np0005531754 podman[99490]: 2025-11-22 05:26:53.366116791 +0000 UTC m=+0.371278361 container remove e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:53 np0005531754 systemd[1]: libpod-conmon-e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae.scope: Deactivated successfully.
Nov 22 00:26:53 np0005531754 podman[99539]: 2025-11-22 05:26:53.514125468 +0000 UTC m=+0.052841102 container create c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:26:53 np0005531754 systemd[1]: Started libpod-conmon-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope.
Nov 22 00:26:53 np0005531754 podman[99539]: 2025-11-22 05:26:53.485445212 +0000 UTC m=+0.024160896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:53 np0005531754 podman[99539]: 2025-11-22 05:26:53.616436755 +0000 UTC m=+0.155152409 container init c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:26:53 np0005531754 podman[99539]: 2025-11-22 05:26:53.633370676 +0000 UTC m=+0.172086320 container start c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:26:53 np0005531754 podman[99539]: 2025-11-22 05:26:53.637824867 +0000 UTC m=+0.176540521 container attach c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:26:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:53 np0005531754 ceph-mgr[76134]: [progress INFO root] Writing back 9 completed events
Nov 22 00:26:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 00:26:53 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:53 np0005531754 python3[99637]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 00:26:53 np0005531754 ceph-mon[75840]: Saving service mds.cephfs spec with placement compute-0
Nov 22 00:26:53 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:53 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:54 np0005531754 python3[99710]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789213.6421154-36607-184308925635291/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=18cfea5729768871b1211ef73b57421c54974f8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]: {
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "osd_id": 1,
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "type": "bluestore"
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:    },
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "osd_id": 2,
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "type": "bluestore"
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:    },
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "osd_id": 0,
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:        "type": "bluestore"
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]:    }
Nov 22 00:26:54 np0005531754 friendly_hoover[99555]: }
Nov 22 00:26:54 np0005531754 systemd[1]: libpod-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope: Deactivated successfully.
Nov 22 00:26:54 np0005531754 podman[99539]: 2025-11-22 05:26:54.650744984 +0000 UTC m=+1.189460628 container died c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 00:26:54 np0005531754 systemd[1]: libpod-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope: Consumed 1.030s CPU time.
Nov 22 00:26:54 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e-merged.mount: Deactivated successfully.
Nov 22 00:26:54 np0005531754 podman[99539]: 2025-11-22 05:26:54.73352158 +0000 UTC m=+1.272237224 container remove c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:54 np0005531754 systemd[1]: libpod-conmon-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope: Deactivated successfully.
Nov 22 00:26:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:26:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:26:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:54 np0005531754 python3[99802]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:54 np0005531754 podman[99828]: 2025-11-22 05:26:54.938568082 +0000 UTC m=+0.046561370 container create d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:26:54 np0005531754 systemd[1]: Started libpod-conmon-d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0.scope.
Nov 22 00:26:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:55 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be10a7d4eec497e1ecef3382468a37d48925fa3f86bbd4183656b7ab790e2e92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be10a7d4eec497e1ecef3382468a37d48925fa3f86bbd4183656b7ab790e2e92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:55 np0005531754 podman[99828]: 2025-11-22 05:26:54.92028084 +0000 UTC m=+0.028274158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:55 np0005531754 podman[99828]: 2025-11-22 05:26:55.031194981 +0000 UTC m=+0.139188279 container init d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:55 np0005531754 podman[99828]: 2025-11-22 05:26:55.038721121 +0000 UTC m=+0.146714429 container start d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:55 np0005531754 podman[99828]: 2025-11-22 05:26:55.042743622 +0000 UTC m=+0.150736950 container attach d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:55 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 22 00:26:55 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 00:26:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 systemd[1]: libpod-d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0.scope: Deactivated successfully.
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 podman[99828]: 2025-11-22 05:26:55.679380904 +0000 UTC m=+0.787374212 container died d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:55 np0005531754 systemd[1]: var-lib-containers-storage-overlay-be10a7d4eec497e1ecef3382468a37d48925fa3f86bbd4183656b7ab790e2e92-merged.mount: Deactivated successfully.
Nov 22 00:26:55 np0005531754 podman[99828]: 2025-11-22 05:26:55.74261152 +0000 UTC m=+0.850604838 container remove d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:55 np0005531754 systemd[1]: libpod-conmon-d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0.scope: Deactivated successfully.
Nov 22 00:26:55 np0005531754 podman[100078]: 2025-11-22 05:26:55.900115521 +0000 UTC m=+0.072961526 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:26:56 np0005531754 podman[100078]: 2025-11-22 05:26:56.023088994 +0000 UTC m=+0.195935029 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 1cb8d91d-e1f1-454b-a180-4a7e6a632fd3 does not exist
Nov 22 00:26:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 08b29d6a-d436-408e-9fa3-f6f30e63c7d7 does not exist
Nov 22 00:26:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 70d5aba2-edf7-4089-8c04-6e19875e0840 does not exist
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:26:56 np0005531754 python3[100211]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.717928886s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.376487732s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.717873573s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.376487732s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724143982s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.382774353s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.704218864s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362899780s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724062920s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.382781982s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724061012s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.382774353s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724007607s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.382781982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.704098701s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362899780s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703700066s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362640381s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723844528s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.382820129s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703682899s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362640381s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723811150s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.382820129s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703653336s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362701416s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703567505s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362632751s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703536034s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362625122s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703519821s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362625122s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703532219s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362632751s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703607559s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362701416s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703354836s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362663269s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703322411s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362663269s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723654747s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383041382s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723637581s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383041382s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723557472s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383026123s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723571777s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383079529s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703066826s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362594604s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723556519s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383079529s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703038216s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362594604s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723469734s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383026123s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.702985764s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362594604s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.702969551s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362594604s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722650528s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383110046s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722628593s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383117676s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.702134132s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362579346s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722602844s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383110046s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722570419s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383117676s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701698303s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362266541s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701620102s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362266541s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701550484s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362266541s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722375870s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383125305s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722352028s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383125305s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701520920s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362266541s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701845169s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362579346s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722254753s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383171082s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722234726s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383171082s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700984001s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361976624s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700995445s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361999512s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722096443s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383171082s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700943947s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361999512s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722075462s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383171082s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700878143s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361976624s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700733185s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361968994s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700714111s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361968994s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700569153s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361900330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700536728s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361900330s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721822739s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383224487s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700509071s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361907959s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721802711s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383224487s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700466156s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361907959s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721702576s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383247375s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700295448s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361862183s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700276375s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361862183s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721673965s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383247375s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700249672s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361892700s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721422195s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383094788s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721402168s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383094788s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700220108s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361892700s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700215340s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361938477s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721586227s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383354187s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700030327s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361839294s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721556664s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383354187s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700012207s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361839294s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721508026s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383384705s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700185776s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361938477s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721473694s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383384705s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721433640s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383392334s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721416473s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383392334s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721368790s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383415222s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721340179s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383415222s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078650475s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.309280396s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677594185s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908340454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677576065s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908348083s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677559853s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908340454s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677550316s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908348083s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078399658s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.309295654s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078375816s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.309295654s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078358650s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.309280396s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078365326s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.309356689s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078348160s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.309356689s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677197456s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908218384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677384377s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908439636s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677154541s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908218384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677034378s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908187866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677010536s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908187866s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677263260s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908439636s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085411072s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316711426s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676651001s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907966614s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085359573s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316711426s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676495552s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907890320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085236549s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316635132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676591873s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907966614s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676454544s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907890320s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085198402s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316635132s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676443100s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908004761s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676421165s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908004761s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085186958s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316780090s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676184654s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907791138s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085124969s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316749573s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085160255s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316780090s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676139832s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907791138s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085083961s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316780090s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085094452s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316749573s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085066795s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316780090s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085054398s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316856384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085033417s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316856384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085009575s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316909790s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675668716s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907585144s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675601959s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907569885s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084976196s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316909790s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675622940s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907585144s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675580025s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907569885s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675228119s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907424927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084730148s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316947937s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675197601s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907424927s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084705353s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316947937s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084603310s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316955566s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675089836s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907463074s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084586143s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316955566s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675065041s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907463074s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688663483s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819038391s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688179970s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818595886s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688629150s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819038391s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688137054s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818595886s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084455490s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316970825s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084383965s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316917419s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084434509s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316970825s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674995422s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907539368s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688069344s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818649292s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084353447s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316917419s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674961090s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907539368s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688043594s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818649292s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084322929s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316963196s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084305763s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316963196s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688065529s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818786621s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675435066s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908126831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675417900s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908126831s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688025475s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818786621s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674637794s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907394409s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084237099s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317008972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687857628s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818672180s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674612045s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907394409s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687729836s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818710327s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687702179s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818710327s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687690735s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818740845s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687647820s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818740845s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687611580s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818794250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687568665s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818794250s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687413216s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818771362s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687615395s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819000244s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687337875s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818771362s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687469482s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819000244s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687012672s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818672180s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084217072s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317008972s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084184647s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317031860s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084166527s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317031860s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685537338s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818832397s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685498238s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818832397s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.685382843s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818870544s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.685358047s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818870544s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685474396s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819129944s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685450554s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819129944s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685056686s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818908691s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683683395s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818885803s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683631897s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818885803s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.683634758s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819198608s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.683594704s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819198608s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674372673s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907310486s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683130264s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819015503s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683092117s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819015503s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084068298s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317024231s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674351692s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907310486s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682958603s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819099426s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674348831s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907318115s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084048271s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317024231s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674324036s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907318115s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083875656s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317031860s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682925224s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819099426s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674061775s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907241821s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083850861s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317031860s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674036026s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907241821s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083807945s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317039490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.684010506s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820343018s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083779335s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317039490s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.673927307s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907218933s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683967590s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820343018s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682831764s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819534302s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682792664s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819534302s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682342529s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819274902s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.673910141s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907218933s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682275772s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819274902s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682429314s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819602966s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.684988976s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818908691s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682388306s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819602966s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682341576s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819641113s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682229042s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819641113s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682801247s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820190430s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682647705s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820190430s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682138443s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819725037s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682074547s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819725037s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682081223s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819763184s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682004929s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819763184s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682127953s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819679260s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682007790s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819770813s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681927681s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819770813s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681835175s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819679260s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681981087s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820068359s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681766510s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819778442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681697845s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819839478s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681667328s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819862366s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681603432s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819778442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681883812s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820068359s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681626320s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819862366s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681440353s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819862366s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681529999s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819885254s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681361198s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819839478s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681388855s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819862366s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681385994s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819885254s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681068420s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819915771s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681196213s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820121765s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681037903s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819915771s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681141853s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820121765s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680937767s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820159912s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681224823s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820449829s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681012154s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820243835s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681147575s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820281982s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681201935s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820449829s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680883408s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820159912s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680886269s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820198059s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680997849s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820281982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680974007s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820343018s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680857658s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820198059s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680956841s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820343018s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680778503s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820281982s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680764198s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820243835s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680745125s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820281982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.664406776s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907714844s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.664361954s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907714844s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:26:56 np0005531754 podman[100231]: 2025-11-22 05:26:56.744101709 +0000 UTC m=+0.065589370 container create aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:26:56 np0005531754 systemd[1]: Started libpod-conmon-aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086.scope.
Nov 22 00:26:56 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4fba946b05bb8727fd9c0f92dc45648004009bc49f85b03c656ccf506eca2e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:56 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4fba946b05bb8727fd9c0f92dc45648004009bc49f85b03c656ccf506eca2e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:56 np0005531754 podman[100231]: 2025-11-22 05:26:56.72907716 +0000 UTC m=+0.050564851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:56 np0005531754 podman[100231]: 2025-11-22 05:26:56.82842747 +0000 UTC m=+0.149915161 container init aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:56 np0005531754 podman[100231]: 2025-11-22 05:26:56.8346059 +0000 UTC m=+0.156093561 container start aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:56 np0005531754 podman[100231]: 2025-11-22 05:26:56.838232071 +0000 UTC m=+0.159719752 container attach aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 22 00:26:57 np0005531754 podman[100407]: 2025-11-22 05:26:57.320119565 +0000 UTC m=+0.047992192 container create 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 22 00:26:57 np0005531754 systemd[1]: Started libpod-conmon-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope.
Nov 22 00:26:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:57 np0005531754 podman[100407]: 2025-11-22 05:26:57.299588863 +0000 UTC m=+0.027461500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:57 np0005531754 podman[100407]: 2025-11-22 05:26:57.403219909 +0000 UTC m=+0.131092536 container init 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 00:26:57 np0005531754 podman[100407]: 2025-11-22 05:26:57.414127314 +0000 UTC m=+0.141999951 container start 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:26:57 np0005531754 podman[100407]: 2025-11-22 05:26:57.418106164 +0000 UTC m=+0.145978861 container attach 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:57 np0005531754 goofy_golick[100424]: 167 167
Nov 22 00:26:57 np0005531754 systemd[1]: libpod-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope: Deactivated successfully.
Nov 22 00:26:57 np0005531754 conmon[100424]: conmon 2684653301d781c572d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope/container/memory.events
Nov 22 00:26:57 np0005531754 podman[100407]: 2025-11-22 05:26:57.422449592 +0000 UTC m=+0.150322199 container died 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:57 np0005531754 systemd[1]: var-lib-containers-storage-overlay-36cacda25cb0581d93346512f43975724954134e68d20479fa89f4419d898e7c-merged.mount: Deactivated successfully.
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759638' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 00:26:57 np0005531754 goofy_satoshi[100279]: 
Nov 22 00:26:57 np0005531754 goofy_satoshi[100279]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":180,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1763789160,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84213760,"bytes_avail":64327712768,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-22T05:26:55.676392+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"b968293a-a6bf-4f5b-a8de-544606057d9d":{"message":"Global Recovery Event (5s)\n      [===================.........] (remaining: 2s)","progress":0.68947368860244751,"add_to_ceph_s":true}}}
Nov 22 00:26:57 np0005531754 podman[100407]: 2025-11-22 05:26:57.48399352 +0000 UTC m=+0.211866157 container remove 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:26:57 np0005531754 systemd[1]: libpod-aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086.scope: Deactivated successfully.
Nov 22 00:26:57 np0005531754 podman[100231]: 2025-11-22 05:26:57.488295857 +0000 UTC m=+0.809783528 container died aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:57 np0005531754 systemd[1]: libpod-conmon-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope: Deactivated successfully.
Nov 22 00:26:57 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ad4fba946b05bb8727fd9c0f92dc45648004009bc49f85b03c656ccf506eca2e-merged.mount: Deactivated successfully.
Nov 22 00:26:57 np0005531754 podman[100231]: 2025-11-22 05:26:57.535379889 +0000 UTC m=+0.856867560 container remove aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 22 00:26:57 np0005531754 systemd[1]: libpod-conmon-aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086.scope: Deactivated successfully.
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 22 00:26:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 22 00:26:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 podman[100460]: 2025-11-22 05:26:57.7172948 +0000 UTC m=+0.077988410 container create b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:57 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:26:57 np0005531754 podman[100460]: 2025-11-22 05:26:57.677933923 +0000 UTC m=+0.038627583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:57 np0005531754 systemd[1]: Started libpod-conmon-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope.
Nov 22 00:26:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:57 np0005531754 podman[100460]: 2025-11-22 05:26:57.838791439 +0000 UTC m=+0.199485109 container init b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:26:57 np0005531754 podman[100460]: 2025-11-22 05:26:57.849170563 +0000 UTC m=+0.209864173 container start b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:57 np0005531754 podman[100460]: 2025-11-22 05:26:57.853396908 +0000 UTC m=+0.214090518 container attach b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:26:57 np0005531754 python3[100503]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:57 np0005531754 podman[100507]: 2025-11-22 05:26:57.992033364 +0000 UTC m=+0.052258539 container create f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:58 np0005531754 systemd[1]: Started libpod-conmon-f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3.scope.
Nov 22 00:26:58 np0005531754 podman[100507]: 2025-11-22 05:26:57.964259137 +0000 UTC m=+0.024484352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:58 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372d19bf14643201d80421a7edf69a8fc6fae40cc205babdbdff7a32a355244/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372d19bf14643201d80421a7edf69a8fc6fae40cc205babdbdff7a32a355244/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:58 np0005531754 podman[100507]: 2025-11-22 05:26:58.091373284 +0000 UTC m=+0.151598489 container init f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:26:58 np0005531754 podman[100507]: 2025-11-22 05:26:58.101536392 +0000 UTC m=+0.161761567 container start f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:58 np0005531754 podman[100507]: 2025-11-22 05:26:58.106344111 +0000 UTC m=+0.166569336 container attach f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:26:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:26:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250960978' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:26:58 np0005531754 eloquent_davinci[100522]: 
Nov 22 00:26:58 np0005531754 eloquent_davinci[100522]: {"epoch":1,"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","modified":"2025-11-22T05:23:51.756901Z","created":"2025-11-22T05:23:51.756901Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 22 00:26:58 np0005531754 eloquent_davinci[100522]: dumped monmap epoch 1
Nov 22 00:26:58 np0005531754 systemd[1]: libpod-f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3.scope: Deactivated successfully.
Nov 22 00:26:58 np0005531754 podman[100507]: 2025-11-22 05:26:58.772533361 +0000 UTC m=+0.832758516 container died f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 00:26:58 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event b968293a-a6bf-4f5b-a8de-544606057d9d (Global Recovery Event) in 10 seconds
Nov 22 00:26:58 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8372d19bf14643201d80421a7edf69a8fc6fae40cc205babdbdff7a32a355244-merged.mount: Deactivated successfully.
Nov 22 00:26:58 np0005531754 podman[100507]: 2025-11-22 05:26:58.816356929 +0000 UTC m=+0.876582054 container remove f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:26:58 np0005531754 systemd[1]: libpod-conmon-f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3.scope: Deactivated successfully.
Nov 22 00:26:58 np0005531754 stoic_driscoll[100501]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:26:58 np0005531754 stoic_driscoll[100501]: --> relative data size: 1.0
Nov 22 00:26:58 np0005531754 stoic_driscoll[100501]: --> All data devices are unavailable
Nov 22 00:26:58 np0005531754 systemd[1]: libpod-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope: Deactivated successfully.
Nov 22 00:26:58 np0005531754 systemd[1]: libpod-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope: Consumed 1.037s CPU time.
Nov 22 00:26:59 np0005531754 podman[100585]: 2025-11-22 05:26:59.013420952 +0000 UTC m=+0.030296065 container died b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:26:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f-merged.mount: Deactivated successfully.
Nov 22 00:26:59 np0005531754 podman[100585]: 2025-11-22 05:26:59.093314283 +0000 UTC m=+0.110189386 container remove b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:26:59 np0005531754 systemd[1]: libpod-conmon-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope: Deactivated successfully.
Nov 22 00:26:59 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 22 00:26:59 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 22 00:26:59 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 22 00:26:59 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 22 00:26:59 np0005531754 python3[100663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:26:59 np0005531754 podman[100727]: 2025-11-22 05:26:59.496867051 +0000 UTC m=+0.044955884 container create 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:26:59 np0005531754 systemd[1]: Started libpod-conmon-693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d.scope.
Nov 22 00:26:59 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff0904bccb11ca7016d38c5550205b10c20d545c55befa650be9d978c58f564/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff0904bccb11ca7016d38c5550205b10c20d545c55befa650be9d978c58f564/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:26:59 np0005531754 podman[100727]: 2025-11-22 05:26:59.479045519 +0000 UTC m=+0.027134372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:26:59 np0005531754 podman[100727]: 2025-11-22 05:26:59.584939327 +0000 UTC m=+0.133028220 container init 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:26:59 np0005531754 podman[100727]: 2025-11-22 05:26:59.592391834 +0000 UTC m=+0.140480667 container start 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:26:59 np0005531754 podman[100727]: 2025-11-22 05:26:59.595654659 +0000 UTC m=+0.143743532 container attach 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:26:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:26:59 np0005531754 podman[100784]: 2025-11-22 05:26:59.77317754 +0000 UTC m=+0.055564523 container create 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 00:26:59 np0005531754 systemd[1]: Started libpod-conmon-1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c.scope.
Nov 22 00:26:59 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:26:59 np0005531754 podman[100784]: 2025-11-22 05:26:59.745298191 +0000 UTC m=+0.027685254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:26:59 np0005531754 podman[100784]: 2025-11-22 05:26:59.844920658 +0000 UTC m=+0.127307631 container init 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:26:59 np0005531754 podman[100784]: 2025-11-22 05:26:59.854166796 +0000 UTC m=+0.136553799 container start 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:26:59 np0005531754 peaceful_mestorf[100801]: 167 167
Nov 22 00:26:59 np0005531754 systemd[1]: libpod-1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c.scope: Deactivated successfully.
Nov 22 00:26:59 np0005531754 podman[100784]: 2025-11-22 05:26:59.858692888 +0000 UTC m=+0.141079881 container attach 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:26:59 np0005531754 podman[100784]: 2025-11-22 05:26:59.85921026 +0000 UTC m=+0.141597233 container died 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:26:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8a42e7c4cfbc01aa08cff93574476b5171b87bf878febfebade03eeb4dd2d26d-merged.mount: Deactivated successfully.
Nov 22 00:26:59 np0005531754 podman[100784]: 2025-11-22 05:26:59.890132187 +0000 UTC m=+0.172519160 container remove 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 00:26:59 np0005531754 systemd[1]: libpod-conmon-1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c.scope: Deactivated successfully.
Nov 22 00:27:00 np0005531754 podman[100844]: 2025-11-22 05:27:00.092692225 +0000 UTC m=+0.052484315 container create 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:00 np0005531754 systemd[1]: Started libpod-conmon-6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7.scope.
Nov 22 00:27:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 22 00:27:00 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3465824516' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 00:27:00 np0005531754 vigorous_chatterjee[100742]: [client.openstack]
Nov 22 00:27:00 np0005531754 vigorous_chatterjee[100742]: #011key = AQDNSCFpAAAAABAAIxLSh4M1I5A41RBE4yCAiQ==
Nov 22 00:27:00 np0005531754 vigorous_chatterjee[100742]: #011caps mgr = "allow *"
Nov 22 00:27:00 np0005531754 vigorous_chatterjee[100742]: #011caps mon = "profile rbd"
Nov 22 00:27:00 np0005531754 vigorous_chatterjee[100742]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 22 00:27:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:00 np0005531754 podman[100844]: 2025-11-22 05:27:00.066901683 +0000 UTC m=+0.026693803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:00 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 22 00:27:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:00 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 22 00:27:00 np0005531754 podman[100844]: 2025-11-22 05:27:00.178180482 +0000 UTC m=+0.137972572 container init 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 00:27:00 np0005531754 systemd[1]: libpod-693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d.scope: Deactivated successfully.
Nov 22 00:27:00 np0005531754 podman[100727]: 2025-11-22 05:27:00.180159986 +0000 UTC m=+0.728248819 container died 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:00 np0005531754 podman[100844]: 2025-11-22 05:27:00.187830089 +0000 UTC m=+0.147622179 container start 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:27:00 np0005531754 podman[100844]: 2025-11-22 05:27:00.192365462 +0000 UTC m=+0.152157592 container attach 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:27:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-cff0904bccb11ca7016d38c5550205b10c20d545c55befa650be9d978c58f564-merged.mount: Deactivated successfully.
Nov 22 00:27:00 np0005531754 podman[100727]: 2025-11-22 05:27:00.223855241 +0000 UTC m=+0.771944064 container remove 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 00:27:00 np0005531754 systemd[1]: libpod-conmon-693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d.scope: Deactivated successfully.
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]: {
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:    "0": [
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:        {
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "devices": [
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "/dev/loop3"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            ],
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_name": "ceph_lv0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_size": "21470642176",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "name": "ceph_lv0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "tags": {
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.crush_device_class": "",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.encrypted": "0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osd_id": "0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.type": "block",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.vdo": "0"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            },
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "type": "block",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "vg_name": "ceph_vg0"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:        }
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:    ],
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:    "1": [
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:        {
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "devices": [
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "/dev/loop4"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            ],
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_name": "ceph_lv1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_size": "21470642176",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "name": "ceph_lv1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "tags": {
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.crush_device_class": "",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.encrypted": "0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osd_id": "1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.type": "block",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.vdo": "0"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            },
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "type": "block",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "vg_name": "ceph_vg1"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:        }
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:    ],
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:    "2": [
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:        {
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "devices": [
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "/dev/loop5"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            ],
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_name": "ceph_lv2",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_size": "21470642176",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "name": "ceph_lv2",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "tags": {
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.crush_device_class": "",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.encrypted": "0",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osd_id": "2",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.type": "block",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:                "ceph.vdo": "0"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            },
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "type": "block",
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:            "vg_name": "ceph_vg2"
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:        }
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]:    ]
Nov 22 00:27:00 np0005531754 adoring_shaw[100860]: }
Nov 22 00:27:01 np0005531754 systemd[1]: libpod-6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7.scope: Deactivated successfully.
Nov 22 00:27:01 np0005531754 podman[100844]: 2025-11-22 05:27:01.021149727 +0000 UTC m=+0.980941937 container died 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:27:01 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3465824516' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 00:27:01 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc-merged.mount: Deactivated successfully.
Nov 22 00:27:01 np0005531754 podman[100844]: 2025-11-22 05:27:01.084683489 +0000 UTC m=+1.044475579 container remove 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:27:01 np0005531754 systemd[1]: libpod-conmon-6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7.scope: Deactivated successfully.
Nov 22 00:27:01 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 22 00:27:01 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 22 00:27:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:01 np0005531754 ansible-async_wrapper.py[101164]: Invoked with j522531097896 30 /home/zuul/.ansible/tmp/ansible-tmp-1763789221.252235-36679-197720406409152/AnsiballZ_command.py _
Nov 22 00:27:01 np0005531754 ansible-async_wrapper.py[101201]: Starting module and watcher
Nov 22 00:27:01 np0005531754 ansible-async_wrapper.py[101201]: Start watching 101202 (30)
Nov 22 00:27:01 np0005531754 ansible-async_wrapper.py[101202]: Start module (101202)
Nov 22 00:27:01 np0005531754 ansible-async_wrapper.py[101164]: Return async_wrapper task started.
Nov 22 00:27:01 np0005531754 podman[101186]: 2025-11-22 05:27:01.72428854 +0000 UTC m=+0.058934330 container create fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:27:01 np0005531754 systemd[1]: Started libpod-conmon-fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d.scope.
Nov 22 00:27:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:01 np0005531754 podman[101186]: 2025-11-22 05:27:01.707398249 +0000 UTC m=+0.042044059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:01 np0005531754 podman[101186]: 2025-11-22 05:27:01.818052014 +0000 UTC m=+0.152697884 container init fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:27:01 np0005531754 podman[101186]: 2025-11-22 05:27:01.830014993 +0000 UTC m=+0.164660793 container start fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:01 np0005531754 podman[101186]: 2025-11-22 05:27:01.834591077 +0000 UTC m=+0.169236917 container attach fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:27:01 np0005531754 trusting_lovelace[101206]: 167 167
Nov 22 00:27:01 np0005531754 systemd[1]: libpod-fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d.scope: Deactivated successfully.
Nov 22 00:27:01 np0005531754 podman[101186]: 2025-11-22 05:27:01.83785074 +0000 UTC m=+0.172496580 container died fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:01 np0005531754 python3[101203]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:01 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8b58bf21f56f54a7722dabc7c45044c1eecdfcc0c9a1bfee52a2b2ff12868299-merged.mount: Deactivated successfully.
Nov 22 00:27:01 np0005531754 podman[101186]: 2025-11-22 05:27:01.883397336 +0000 UTC m=+0.218043136 container remove fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 00:27:01 np0005531754 systemd[1]: libpod-conmon-fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d.scope: Deactivated successfully.
Nov 22 00:27:01 np0005531754 podman[101215]: 2025-11-22 05:27:01.926688782 +0000 UTC m=+0.052847771 container create 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:01 np0005531754 systemd[1]: Started libpod-conmon-1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053.scope.
Nov 22 00:27:01 np0005531754 podman[101215]: 2025-11-22 05:27:01.902427656 +0000 UTC m=+0.028586705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5035c155c2856a14a03cf30f3ea267f1c26eea983be09eb2ee0ae101a657562/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5035c155c2856a14a03cf30f3ea267f1c26eea983be09eb2ee0ae101a657562/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:02 np0005531754 podman[101215]: 2025-11-22 05:27:02.016428246 +0000 UTC m=+0.142587245 container init 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:27:02 np0005531754 podman[101215]: 2025-11-22 05:27:02.02413702 +0000 UTC m=+0.150295979 container start 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:27:02 np0005531754 podman[101215]: 2025-11-22 05:27:02.028824825 +0000 UTC m=+0.154983814 container attach 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 00:27:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:02 np0005531754 podman[101251]: 2025-11-22 05:27:02.06096748 +0000 UTC m=+0.046430748 container create e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:27:02 np0005531754 systemd[1]: Started libpod-conmon-e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b.scope.
Nov 22 00:27:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:02 np0005531754 podman[101251]: 2025-11-22 05:27:02.040058388 +0000 UTC m=+0.025521686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:02 np0005531754 podman[101251]: 2025-11-22 05:27:02.166214762 +0000 UTC m=+0.151678070 container init e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:27:02 np0005531754 podman[101251]: 2025-11-22 05:27:02.177702872 +0000 UTC m=+0.163166140 container start e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:27:02 np0005531754 podman[101251]: 2025-11-22 05:27:02.181807755 +0000 UTC m=+0.167271043 container attach e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 22 00:27:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 00:27:02 np0005531754 quizzical_boyd[101243]: 
Nov 22 00:27:02 np0005531754 quizzical_boyd[101243]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 00:27:02 np0005531754 systemd[1]: libpod-1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053.scope: Deactivated successfully.
Nov 22 00:27:02 np0005531754 podman[101215]: 2025-11-22 05:27:02.598688043 +0000 UTC m=+0.724847062 container died 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d5035c155c2856a14a03cf30f3ea267f1c26eea983be09eb2ee0ae101a657562-merged.mount: Deactivated successfully.
Nov 22 00:27:02 np0005531754 podman[101215]: 2025-11-22 05:27:02.65219722 +0000 UTC m=+0.778356209 container remove 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:02 np0005531754 systemd[1]: libpod-conmon-1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053.scope: Deactivated successfully.
Nov 22 00:27:02 np0005531754 ansible-async_wrapper.py[101202]: Module complete (101202)
Nov 22 00:27:03 np0005531754 python3[101362]: ansible-ansible.legacy.async_status Invoked with jid=j522531097896.101164 mode=status _async_dir=/root/.ansible_async
Nov 22 00:27:03 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 22 00:27:03 np0005531754 busy_panini[101268]: {
Nov 22 00:27:03 np0005531754 busy_panini[101268]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "osd_id": 1,
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "type": "bluestore"
Nov 22 00:27:03 np0005531754 busy_panini[101268]:    },
Nov 22 00:27:03 np0005531754 busy_panini[101268]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "osd_id": 2,
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "type": "bluestore"
Nov 22 00:27:03 np0005531754 busy_panini[101268]:    },
Nov 22 00:27:03 np0005531754 busy_panini[101268]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "osd_id": 0,
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:27:03 np0005531754 busy_panini[101268]:        "type": "bluestore"
Nov 22 00:27:03 np0005531754 busy_panini[101268]:    }
Nov 22 00:27:03 np0005531754 busy_panini[101268]: }
Nov 22 00:27:03 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 22 00:27:03 np0005531754 systemd[1]: libpod-e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b.scope: Deactivated successfully.
Nov 22 00:27:03 np0005531754 podman[101251]: 2025-11-22 05:27:03.143367493 +0000 UTC m=+1.128830791 container died e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 00:27:03 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252-merged.mount: Deactivated successfully.
Nov 22 00:27:03 np0005531754 podman[101251]: 2025-11-22 05:27:03.215184733 +0000 UTC m=+1.200648011 container remove e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:27:03 np0005531754 systemd[1]: libpod-conmon-e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b.scope: Deactivated successfully.
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:03 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 1a664a35-5bb3-4869-8e4e-8e75b7bda84f (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:27:03 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pzxxqv on compute-0
Nov 22 00:27:03 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pzxxqv on compute-0
Nov 22 00:27:03 np0005531754 python3[101441]: ansible-ansible.legacy.async_status Invoked with jid=j522531097896.101164 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 00:27:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:03 np0005531754 ceph-mgr[76134]: [progress INFO root] Writing back 10 completed events
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 00:27:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:03 np0005531754 podman[101608]: 2025-11-22 05:27:03.906404716 +0000 UTC m=+0.045550748 container create 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:27:03 np0005531754 systemd[1]: Started libpod-conmon-6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886.scope.
Nov 22 00:27:03 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:03 np0005531754 podman[101608]: 2025-11-22 05:27:03.891680794 +0000 UTC m=+0.030826846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:04 np0005531754 podman[101608]: 2025-11-22 05:27:04.001143552 +0000 UTC m=+0.140289644 container init 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:27:04 np0005531754 podman[101608]: 2025-11-22 05:27:04.011196188 +0000 UTC m=+0.150342260 container start 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 22 00:27:04 np0005531754 focused_elbakyan[101624]: 167 167
Nov 22 00:27:04 np0005531754 python3[101602]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:04 np0005531754 systemd[1]: libpod-6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886.scope: Deactivated successfully.
Nov 22 00:27:04 np0005531754 podman[101608]: 2025-11-22 05:27:04.015776791 +0000 UTC m=+0.154922833 container attach 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:04 np0005531754 podman[101608]: 2025-11-22 05:27:04.020773654 +0000 UTC m=+0.159919696 container died 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-268f688e7b7042c4bbdb36faeb6b57fe3992c1b0c2329a8f89183ab2d2d489f4-merged.mount: Deactivated successfully.
Nov 22 00:27:04 np0005531754 podman[101608]: 2025-11-22 05:27:04.066776542 +0000 UTC m=+0.205922604 container remove 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:04 np0005531754 systemd[1]: libpod-conmon-6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886.scope: Deactivated successfully.
Nov 22 00:27:04 np0005531754 podman[101630]: 2025-11-22 05:27:04.094486047 +0000 UTC m=+0.054546171 container create 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:27:04 np0005531754 systemd[1]: Started libpod-conmon-3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d.scope.
Nov 22 00:27:04 np0005531754 systemd[1]: Reloading.
Nov 22 00:27:04 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 22 00:27:04 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 22 00:27:04 np0005531754 podman[101630]: 2025-11-22 05:27:04.077563575 +0000 UTC m=+0.037623749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:04 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:27:04 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:27:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 00:27:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 00:27:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:04 np0005531754 ceph-mon[75840]: Deploying daemon rgw.rgw.compute-0.pzxxqv on compute-0
Nov 22 00:27:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:04 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a489d8218a54da8f514f09db64453a557cda02701675fa67968ca0be8b8b03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a489d8218a54da8f514f09db64453a557cda02701675fa67968ca0be8b8b03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:04 np0005531754 podman[101630]: 2025-11-22 05:27:04.422459321 +0000 UTC m=+0.382519505 container init 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:04 np0005531754 podman[101630]: 2025-11-22 05:27:04.435710769 +0000 UTC m=+0.395770933 container start 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:27:04 np0005531754 podman[101630]: 2025-11-22 05:27:04.439803771 +0000 UTC m=+0.399863925 container attach 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:27:04 np0005531754 systemd[1]: Reloading.
Nov 22 00:27:04 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:27:04 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:27:04 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 22 00:27:04 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 22 00:27:04 np0005531754 systemd[1]: Starting Ceph rgw.rgw.compute-0.pzxxqv for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 00:27:05 np0005531754 sad_babbage[101659]: 
Nov 22 00:27:05 np0005531754 sad_babbage[101659]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 00:27:05 np0005531754 systemd[1]: libpod-3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d.scope: Deactivated successfully.
Nov 22 00:27:05 np0005531754 podman[101804]: 2025-11-22 05:27:05.103319191 +0000 UTC m=+0.066769816 container create 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:27:05 np0005531754 podman[101816]: 2025-11-22 05:27:05.120109599 +0000 UTC m=+0.033680110 container died 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:05 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d9a489d8218a54da8f514f09db64453a557cda02701675fa67968ca0be8b8b03-merged.mount: Deactivated successfully.
Nov 22 00:27:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pzxxqv supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:05 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Nov 22 00:27:05 np0005531754 podman[101804]: 2025-11-22 05:27:05.064940896 +0000 UTC m=+0.028391561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:05 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 22 00:27:05 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Nov 22 00:27:05 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 22 00:27:05 np0005531754 podman[101804]: 2025-11-22 05:27:05.180707145 +0000 UTC m=+0.144157730 container init 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:27:05 np0005531754 podman[101816]: 2025-11-22 05:27:05.185113425 +0000 UTC m=+0.098683946 container remove 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:27:05 np0005531754 podman[101804]: 2025-11-22 05:27:05.18711518 +0000 UTC m=+0.150565775 container start 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:05 np0005531754 bash[101804]: 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df
Nov 22 00:27:05 np0005531754 systemd[1]: libpod-conmon-3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d.scope: Deactivated successfully.
Nov 22 00:27:05 np0005531754 systemd[1]: Started Ceph rgw.rgw.compute-0.pzxxqv for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:27:05 np0005531754 radosgw[101838]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:27:05 np0005531754 radosgw[101838]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 22 00:27:05 np0005531754 radosgw[101838]: framework: beast
Nov 22 00:27:05 np0005531754 radosgw[101838]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 22 00:27:05 np0005531754 radosgw[101838]: init_numa not setting numa affinity
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 1a664a35-5bb3-4869-8e4e-8e75b7bda84f (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 1a664a35-5bb3-4869-8e4e-8e75b7bda84f (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 77836df8-3f54-48e9-abc3-3ced0db86ca6 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:27:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.dntioh on compute-0
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.dntioh on compute-0
Nov 22 00:27:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:05 np0005531754 podman[102041]: 2025-11-22 05:27:05.960395634 +0000 UTC m=+0.056367462 container create 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:27:06 np0005531754 systemd[1]: Started libpod-conmon-831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f.scope.
Nov 22 00:27:06 np0005531754 podman[102041]: 2025-11-22 05:27:05.931719618 +0000 UTC m=+0.027691526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:06 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:06 np0005531754 podman[102041]: 2025-11-22 05:27:06.074993948 +0000 UTC m=+0.170965826 container init 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 00:27:06 np0005531754 podman[102041]: 2025-11-22 05:27:06.088957782 +0000 UTC m=+0.184929610 container start 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:27:06 np0005531754 podman[102041]: 2025-11-22 05:27:06.092680797 +0000 UTC m=+0.188652685 container attach 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 00:27:06 np0005531754 ecstatic_wescoff[102083]: 167 167
Nov 22 00:27:06 np0005531754 systemd[1]: libpod-831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f.scope: Deactivated successfully.
Nov 22 00:27:06 np0005531754 podman[102041]: 2025-11-22 05:27:06.099991681 +0000 UTC m=+0.195963519 container died 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:27:06 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5beddd90358ee8183e9fabe4f5a39620ddaf266fb03c2edef7a42d318af69935-merged.mount: Deactivated successfully.
Nov 22 00:27:06 np0005531754 podman[102041]: 2025-11-22 05:27:06.140635598 +0000 UTC m=+0.236607426 container remove 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:27:06 np0005531754 systemd[1]: libpod-conmon-831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f.scope: Deactivated successfully.
Nov 22 00:27:06 np0005531754 systemd[1]: Reloading.
Nov 22 00:27:06 np0005531754 python3[102085]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 00:27:06 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:27:06 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:27:06 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:06 np0005531754 podman[102106]: 2025-11-22 05:27:06.302149409 +0000 UTC m=+0.060858003 container create 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: Saving service rgw.rgw spec with placement compute-0
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: Deploying daemon mds.cephfs.compute-0.dntioh on compute-0
Nov 22 00:27:06 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 00:27:06 np0005531754 podman[102106]: 2025-11-22 05:27:06.267544629 +0000 UTC m=+0.026253273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:06 np0005531754 systemd[1]: Started libpod-conmon-6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984.scope.
Nov 22 00:27:06 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361264f3cf9549a2ecb89ae4cdf65c600f0765add394adadba8db0c3108d6ae4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361264f3cf9549a2ecb89ae4cdf65c600f0765add394adadba8db0c3108d6ae4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:06 np0005531754 podman[102106]: 2025-11-22 05:27:06.558877377 +0000 UTC m=+0.317586051 container init 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:27:06 np0005531754 systemd[1]: Reloading.
Nov 22 00:27:06 np0005531754 podman[102106]: 2025-11-22 05:27:06.56697832 +0000 UTC m=+0.325686924 container start 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:27:06 np0005531754 podman[102106]: 2025-11-22 05:27:06.569698912 +0000 UTC m=+0.328407566 container attach 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:06 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:27:06 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:27:06 np0005531754 ansible-async_wrapper.py[101201]: Done in kid B.
Nov 22 00:27:06 np0005531754 systemd[1]: Starting Ceph mds.cephfs.compute-0.dntioh for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:07 np0005531754 podman[102265]: 2025-11-22 05:27:07.09170455 +0000 UTC m=+0.055114674 container create 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:27:07 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 00:27:07 np0005531754 recursing_lewin[102155]: 
Nov 22 00:27:07 np0005531754 recursing_lewin[102155]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 22 00:27:07 np0005531754 systemd[1]: libpod-6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984.scope: Deactivated successfully.
Nov 22 00:27:07 np0005531754 podman[102106]: 2025-11-22 05:27:07.12453483 +0000 UTC m=+0.883243474 container died 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:27:07 np0005531754 podman[102265]: 2025-11-22 05:27:07.063117225 +0000 UTC m=+0.026527329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay-361264f3cf9549a2ecb89ae4cdf65c600f0765add394adadba8db0c3108d6ae4-merged.mount: Deactivated successfully.
Nov 22 00:27:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.dntioh supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:07 np0005531754 podman[102265]: 2025-11-22 05:27:07.189807031 +0000 UTC m=+0.153217165 container init 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:27:07 np0005531754 podman[102106]: 2025-11-22 05:27:07.194232102 +0000 UTC m=+0.952940706 container remove 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:07 np0005531754 podman[102265]: 2025-11-22 05:27:07.197789682 +0000 UTC m=+0.161199776 container start 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:27:07 np0005531754 bash[102265]: 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf
Nov 22 00:27:07 np0005531754 systemd[1]: Started Ceph mds.cephfs.compute-0.dntioh for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 00:27:07 np0005531754 systemd[1]: libpod-conmon-6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984.scope: Deactivated successfully.
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: main not setting numa affinity
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: pidfile_write: ignore empty --pid-file
Nov 22 00:27:07 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh[102286]: starting mds.cephfs.compute-0.dntioh at 
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 2 from mon.0
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 77836df8-3f54-48e9-abc3-3ced0db86ca6 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 00:27:07 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 77836df8-3f54-48e9-abc3-3ced0db86ca6 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 48 pg[8.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 new map
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-22T05:26:51.740040+0000#012modified#0112025-11-22T05:26:51.740073+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.dntioh{-1:14269} state up:standby seq 1 addr [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] compat {c=[1],r=[1],i=[7ff]}]
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 3 from mon.0
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Monitors have assigned me to become a standby.
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] up:boot
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] as mds.0
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dntioh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.dntioh"} v 0) v1
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.dntioh"}]: dispatch
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 all = 0
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e4 new map
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-22T05:26:51.740040+0000#012modified#0112025-11-22T05:27:07.358183+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.dntioh{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 4 from mon.0
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x1
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dntioh=up:creating}
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x100
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x600
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x601
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x602
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x603
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x604
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x605
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x606
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x607
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x608
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x609
Nov 22 00:27:07 np0005531754 ceph-mds[102299]: mds.0.4 creating_done
Nov 22 00:27:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dntioh is now active in filesystem cephfs as rank 0
Nov 22 00:27:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v111: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:07 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 22 00:27:07 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 22 00:27:08 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 22 00:27:08 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 22 00:27:08 np0005531754 python3[102556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 00:27:08 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:08 np0005531754 podman[102578]: 2025-11-22 05:27:08.320972704 +0000 UTC m=+0.065368774 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: daemon mds.cephfs.compute-0.dntioh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: Cluster is now healthy
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: daemon mds.cephfs.compute-0.dntioh is now active in filesystem cephfs as rank 0
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e5 new map
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-22T05:26:51.740040+0000#012modified#0112025-11-22T05:27:08.364438+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.dntioh{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 22 00:27:08 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 5 from mon.0
Nov 22 00:27:08 np0005531754 ceph-mds[102299]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 22 00:27:08 np0005531754 ceph-mds[102299]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 22 00:27:08 np0005531754 ceph-mds[102299]: mds.0.4 recovery_done -- successful recovery!
Nov 22 00:27:08 np0005531754 ceph-mds[102299]: mds.0.4 active_start
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] up:active
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dntioh=up:active}
Nov 22 00:27:08 np0005531754 podman[102592]: 2025-11-22 05:27:08.378360118 +0000 UTC m=+0.069783665 container create 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:27:08 np0005531754 systemd[1]: Started libpod-conmon-873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720.scope.
Nov 22 00:27:08 np0005531754 podman[102578]: 2025-11-22 05:27:08.44184752 +0000 UTC m=+0.186243550 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:27:08 np0005531754 podman[102592]: 2025-11-22 05:27:08.350638833 +0000 UTC m=+0.042062380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836af71b3678935a554e41ba2ffce2dad47a07978bd077776c0f53988a8b57c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836af71b3678935a554e41ba2ffce2dad47a07978bd077776c0f53988a8b57c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:08 np0005531754 podman[102592]: 2025-11-22 05:27:08.478785482 +0000 UTC m=+0.170209019 container init 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:27:08 np0005531754 podman[102592]: 2025-11-22 05:27:08.48624563 +0000 UTC m=+0.177669157 container start 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:27:08 np0005531754 podman[102592]: 2025-11-22 05:27:08.490880785 +0000 UTC m=+0.182304332 container attach 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:27:08 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Nov 22 00:27:08 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Nov 22 00:27:08 np0005531754 ceph-mgr[76134]: [progress INFO root] Writing back 12 completed events
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 00:27:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 00:27:09 np0005531754 sleepy_panini[102615]: 
Nov 22 00:27:09 np0005531754 sleepy_panini[102615]: [{"container_id": "c4eec30b75a2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.46%", "created": "2025-11-22T05:25:10.945432Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-22T05:25:11.014820Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.600973Z", "memory_usage": 11806965, "ports": [], "service_name": "crash", "started": "2025-11-22T05:25:10.830213Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.dntioh", "daemon_name": "mds.cephfs.compute-0.dntioh", "daemon_type": "mds", "events": ["2025-11-22T05:27:07.278515Z daemon:mds.cephfs.compute-0.dntioh [INFO] \"Deployed mds.cephfs.compute-0.dntioh on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "73442774e724", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "26.50%", "created": "2025-11-22T05:23:59.278535Z", "daemon_id": "compute-0.mscchl", "daemon_name": "mgr.compute-0.mscchl", "daemon_type": "mgr", "events": ["2025-11-22T05:25:16.530913Z daemon:mgr.compute-0.mscchl [INFO] \"Reconfigured mgr.compute-0.mscchl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.600830Z", "memory_usage": 549348966, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-22T05:23:59.138975Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.mscchl", "version": "18.2.7"}, {"container_id": "d2c85725d384", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.10%", "created": "2025-11-22T05:23:53.991399Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-22T05:25:15.809351Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.600653Z", "memory_request": 2147483648, "memory_usage": 39122370, "ports": [], "service_name": "mon", "started": "2025-11-22T05:23:56.813694Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mon.compute-0", "version": "18.2.7"}, {"container_id": "49ecd6cb38e9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.55%", "created": "2025-11-22T05:25:40.511266Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-22T05:25:40.576194Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.601130Z", "memory_request": 4294967296, "memory_usage": 61205381, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T05:25:40.379797Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@osd.0", "version": "18.2.7"}, {"container_id": "4bf032245a15", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.64%", "created": "2025-11-22T05:25:45.841430Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-22T05:25:45.992780Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.601262Z", "memory_request": 4294967296, "memory_usage": 61383639, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T05:25:45.632228Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@osd.1", "version": "18.2.7"}, {"container_id": "320c74d22126", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.63%", "created": "2025-11-22T05:25:52.068622Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-22T05:25:52.180845Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.601392Z", "memory_request": 4294967296, "memory_usage": 60639150, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T05:25:51.901750Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.pzxxqv", "daemon_name": "rgw.rgw.compute-0.pzxxqv", "daemon_type": "rgw", "events": ["2025-11-22T05:27:05.269512Z daemon:rgw.rgw.compute-0.pzxxqv [INFO] \"Deployed rgw.rgw.compute-0.pzxxqv on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Nov 22 00:27:09 np0005531754 systemd[1]: libpod-873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720.scope: Deactivated successfully.
Nov 22 00:27:09 np0005531754 podman[102592]: 2025-11-22 05:27:09.06770049 +0000 UTC m=+0.759124007 container died 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:27:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay-836af71b3678935a554e41ba2ffce2dad47a07978bd077776c0f53988a8b57c4-merged.mount: Deactivated successfully.
Nov 22 00:27:09 np0005531754 podman[102592]: 2025-11-22 05:27:09.109578754 +0000 UTC m=+0.801002311 container remove 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:09 np0005531754 systemd[1]: libpod-conmon-873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720.scope: Deactivated successfully.
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:27:09 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:27:09 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 62597db3-339b-4b60-9b3a-b3043ed8ca5d does not exist
Nov 22 00:27:09 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ca0b2785-b168-4072-b6da-a447e2a1e786 does not exist
Nov 22 00:27:09 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f9a9256c-4b26-48ec-bcb9-99df7376f551 does not exist
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 22 00:27:09 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 50 pg[9.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:27:09 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 00:27:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v114: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:09 np0005531754 podman[102933]: 2025-11-22 05:27:09.876435712 +0000 UTC m=+0.057812874 container create f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:09 np0005531754 systemd[1]: Started libpod-conmon-f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a.scope.
Nov 22 00:27:09 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:09 np0005531754 podman[102933]: 2025-11-22 05:27:09.848882422 +0000 UTC m=+0.030259624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:09 np0005531754 podman[102933]: 2025-11-22 05:27:09.961286056 +0000 UTC m=+0.142663308 container init f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:27:09 np0005531754 podman[102933]: 2025-11-22 05:27:09.971925505 +0000 UTC m=+0.153302667 container start f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:27:09 np0005531754 podman[102933]: 2025-11-22 05:27:09.97654547 +0000 UTC m=+0.157922712 container attach f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:27:09 np0005531754 infallible_swartz[102949]: 167 167
Nov 22 00:27:09 np0005531754 systemd[1]: libpod-f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a.scope: Deactivated successfully.
Nov 22 00:27:09 np0005531754 podman[102933]: 2025-11-22 05:27:09.978201907 +0000 UTC m=+0.159579069 container died f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 00:27:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ed5e1b8f5fb36b6e1ee923c627a4e65263354c48255deba193ee33624445d67b-merged.mount: Deactivated successfully.
Nov 22 00:27:10 np0005531754 podman[102933]: 2025-11-22 05:27:10.029748529 +0000 UTC m=+0.211125691 container remove f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:10 np0005531754 systemd[1]: libpod-conmon-f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a.scope: Deactivated successfully.
Nov 22 00:27:10 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 22 00:27:10 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 22 00:27:10 np0005531754 python3[102993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:10 np0005531754 podman[103005]: 2025-11-22 05:27:10.262771303 +0000 UTC m=+0.048567327 container create edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:27:10 np0005531754 podman[102999]: 2025-11-22 05:27:10.271144811 +0000 UTC m=+0.067238677 container create 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 22 00:27:10 np0005531754 systemd[1]: Started libpod-conmon-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope.
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 22 00:27:10 np0005531754 systemd[1]: Started libpod-conmon-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope.
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 00:27:10 np0005531754 podman[103005]: 2025-11-22 05:27:10.240202124 +0000 UTC m=+0.025998148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:10 np0005531754 podman[102999]: 2025-11-22 05:27:10.245957564 +0000 UTC m=+0.042051400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b79eaab2d82fd699b4bc42fb698cb3ad402dfbbbe8edfd9455a91a8563a7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b79eaab2d82fd699b4bc42fb698cb3ad402dfbbbe8edfd9455a91a8563a7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 00:27:10 np0005531754 podman[103005]: 2025-11-22 05:27:10.391796941 +0000 UTC m=+0.177593015 container init edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:10 np0005531754 podman[102999]: 2025-11-22 05:27:10.398322779 +0000 UTC m=+0.194416695 container init 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:10 np0005531754 podman[103005]: 2025-11-22 05:27:10.407163308 +0000 UTC m=+0.192959342 container start edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:27:10 np0005531754 podman[102999]: 2025-11-22 05:27:10.412951498 +0000 UTC m=+0.209045374 container start 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:10 np0005531754 podman[103005]: 2025-11-22 05:27:10.41342985 +0000 UTC m=+0.199225874 container attach edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:10 np0005531754 podman[102999]: 2025-11-22 05:27:10.417783418 +0000 UTC m=+0.213877334 container attach 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 00:27:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225977205' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 00:27:10 np0005531754 laughing_moser[103032]: 
Nov 22 00:27:10 np0005531754 laughing_moser[103032]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":193,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1763789160,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"unknown","count":2}],"num_pgs":195,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":84307968,"bytes_avail":64327618560,"bytes_total":64411926528,"unknown_pgs_ratio":0.010256410576403141},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.dntioh","status":"up:active","gid":14269}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-11-22T05:27:01.678326+0000","services":{}},"progress_events":{}}
Nov 22 00:27:11 np0005531754 systemd[1]: libpod-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope: Deactivated successfully.
Nov 22 00:27:11 np0005531754 conmon[103032]: conmon edff554f5fff9a1f4820 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope/container/memory.events
Nov 22 00:27:11 np0005531754 podman[103061]: 2025-11-22 05:27:11.076320554 +0000 UTC m=+0.041992688 container died edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:11 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1e7b79eaab2d82fd699b4bc42fb698cb3ad402dfbbbe8edfd9455a91a8563a7f-merged.mount: Deactivated successfully.
Nov 22 00:27:11 np0005531754 podman[103061]: 2025-11-22 05:27:11.130788442 +0000 UTC m=+0.096460556 container remove edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:11 np0005531754 systemd[1]: libpod-conmon-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope: Deactivated successfully.
Nov 22 00:27:11 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 22 00:27:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 00:27:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 22 00:27:11 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 22 00:27:11 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:11 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 00:27:11 np0005531754 priceless_ardinghelli[103031]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:27:11 np0005531754 priceless_ardinghelli[103031]: --> relative data size: 1.0
Nov 22 00:27:11 np0005531754 priceless_ardinghelli[103031]: --> All data devices are unavailable
Nov 22 00:27:11 np0005531754 systemd[1]: libpod-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope: Deactivated successfully.
Nov 22 00:27:11 np0005531754 podman[102999]: 2025-11-22 05:27:11.540705764 +0000 UTC m=+1.336799610 container died 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:27:11 np0005531754 systemd[1]: libpod-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope: Consumed 1.056s CPU time.
Nov 22 00:27:11 np0005531754 systemd[1]: var-lib-containers-storage-overlay-63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a-merged.mount: Deactivated successfully.
Nov 22 00:27:11 np0005531754 podman[102999]: 2025-11-22 05:27:11.609998286 +0000 UTC m=+1.406092112 container remove 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:27:11 np0005531754 systemd[1]: libpod-conmon-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope: Deactivated successfully.
Nov 22 00:27:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v117: 196 pgs: 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 00:27:11 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 22 00:27:11 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:12 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 22 00:27:12 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 22 00:27:12 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 22 00:27:12 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 22 00:27:12 np0005531754 python3[103253]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 22 00:27:12 np0005531754 podman[103292]: 2025-11-22 05:27:12.326739035 +0000 UTC m=+0.057708262 container create 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 00:27:12 np0005531754 podman[103293]: 2025-11-22 05:27:12.357579281 +0000 UTC m=+0.073210812 container create e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 22 00:27:12 np0005531754 systemd[1]: Started libpod-conmon-22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4.scope.
Nov 22 00:27:12 np0005531754 podman[103292]: 2025-11-22 05:27:12.298067379 +0000 UTC m=+0.029036686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:12 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:12 np0005531754 systemd[1]: Started libpod-conmon-e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da.scope.
Nov 22 00:27:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c94510fb014bf1bb999d088c8caa321f4c3694575e51f5d94fe770eeade7209/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c94510fb014bf1bb999d088c8caa321f4c3694575e51f5d94fe770eeade7209/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:12 np0005531754 podman[103293]: 2025-11-22 05:27:12.328463185 +0000 UTC m=+0.044094746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:12 np0005531754 podman[103292]: 2025-11-22 05:27:12.425815299 +0000 UTC m=+0.156784536 container init 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:12 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:12 np0005531754 podman[103292]: 2025-11-22 05:27:12.4334083 +0000 UTC m=+0.164377537 container start 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:12 np0005531754 podman[103292]: 2025-11-22 05:27:12.439391456 +0000 UTC m=+0.170360683 container attach 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:12 np0005531754 podman[103293]: 2025-11-22 05:27:12.450119437 +0000 UTC m=+0.165750998 container init e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:12 np0005531754 podman[103293]: 2025-11-22 05:27:12.460327348 +0000 UTC m=+0.175958909 container start e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 00:27:12 np0005531754 fervent_cohen[103327]: 167 167
Nov 22 00:27:12 np0005531754 podman[103293]: 2025-11-22 05:27:12.467650283 +0000 UTC m=+0.183281834 container attach e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:12 np0005531754 systemd[1]: libpod-e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da.scope: Deactivated successfully.
Nov 22 00:27:12 np0005531754 podman[103293]: 2025-11-22 05:27:12.468580024 +0000 UTC m=+0.184211585 container died e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:27:12 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e8c8703731dcde36df601d2cb8cf9f9a4ce574310fccf79d96a9191ad56897e9-merged.mount: Deactivated successfully.
Nov 22 00:27:12 np0005531754 podman[103293]: 2025-11-22 05:27:12.526907108 +0000 UTC m=+0.242538659 container remove e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:27:12 np0005531754 systemd[1]: libpod-conmon-e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da.scope: Deactivated successfully.
Nov 22 00:27:12 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 22 00:27:12 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:12 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 22 00:27:12 np0005531754 podman[103352]: 2025-11-22 05:27:12.739854739 +0000 UTC m=+0.059734097 container create 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:27:12 np0005531754 systemd[1]: Started libpod-conmon-111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d.scope.
Nov 22 00:27:12 np0005531754 podman[103352]: 2025-11-22 05:27:12.714896947 +0000 UTC m=+0.034776385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:12 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:12 np0005531754 podman[103352]: 2025-11-22 05:27:12.849671795 +0000 UTC m=+0.169551233 container init 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:27:12 np0005531754 podman[103352]: 2025-11-22 05:27:12.863010106 +0000 UTC m=+0.182889464 container start 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:12 np0005531754 podman[103352]: 2025-11-22 05:27:12.867819125 +0000 UTC m=+0.187698513 container attach 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 00:27:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021684245' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 00:27:12 np0005531754 magical_wright[103322]: 
Nov 22 00:27:12 np0005531754 magical_wright[103322]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pzxxqv","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 22 00:27:12 np0005531754 systemd[1]: libpod-22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4.scope: Deactivated successfully.
Nov 22 00:27:12 np0005531754 podman[103292]: 2025-11-22 05:27:12.987366639 +0000 UTC m=+0.718335846 container died 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5c94510fb014bf1bb999d088c8caa321f4c3694575e51f5d94fe770eeade7209-merged.mount: Deactivated successfully.
Nov 22 00:27:13 np0005531754 podman[103292]: 2025-11-22 05:27:13.037778116 +0000 UTC m=+0.768747363 container remove 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:27:13 np0005531754 systemd[1]: libpod-conmon-22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4.scope: Deactivated successfully.
Nov 22 00:27:13 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 22 00:27:13 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 22 00:27:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 22 00:27:13 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 00:27:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 00:27:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 22 00:27:13 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 22 00:27:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 22 00:27:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 00:27:13 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]: {
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:    "0": [
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:        {
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "devices": [
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "/dev/loop3"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            ],
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_name": "ceph_lv0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_size": "21470642176",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "name": "ceph_lv0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "tags": {
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.crush_device_class": "",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.encrypted": "0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osd_id": "0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.type": "block",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.vdo": "0"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            },
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "type": "block",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "vg_name": "ceph_vg0"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:        }
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:    ],
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:    "1": [
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:        {
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "devices": [
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "/dev/loop4"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            ],
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_name": "ceph_lv1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_size": "21470642176",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "name": "ceph_lv1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "tags": {
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.crush_device_class": "",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.encrypted": "0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osd_id": "1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.type": "block",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.vdo": "0"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            },
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "type": "block",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "vg_name": "ceph_vg1"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:        }
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:    ],
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:    "2": [
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:        {
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "devices": [
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "/dev/loop5"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            ],
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_name": "ceph_lv2",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_size": "21470642176",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "name": "ceph_lv2",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "tags": {
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.crush_device_class": "",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.encrypted": "0",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osd_id": "2",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.type": "block",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:                "ceph.vdo": "0"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            },
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "type": "block",
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:            "vg_name": "ceph_vg2"
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:        }
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]:    ]
Nov 22 00:27:13 np0005531754 hopeful_kirch[103387]: }
Nov 22 00:27:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v120: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 00:27:13 np0005531754 systemd[1]: libpod-111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d.scope: Deactivated successfully.
Nov 22 00:27:13 np0005531754 podman[103352]: 2025-11-22 05:27:13.70550879 +0000 UTC m=+1.025388208 container died 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:27:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c-merged.mount: Deactivated successfully.
Nov 22 00:27:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:27:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:27:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:27:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:27:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:27:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:27:13 np0005531754 podman[103352]: 2025-11-22 05:27:13.779285694 +0000 UTC m=+1.099165082 container remove 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:27:13 np0005531754 systemd[1]: libpod-conmon-111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d.scope: Deactivated successfully.
Nov 22 00:27:14 np0005531754 python3[103480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:14 np0005531754 podman[103528]: 2025-11-22 05:27:14.192908809 +0000 UTC m=+0.063691177 container create b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:27:14 np0005531754 systemd[1]: Started libpod-conmon-b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc.scope.
Nov 22 00:27:14 np0005531754 podman[103528]: 2025-11-22 05:27:14.165765577 +0000 UTC m=+0.036548035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:14 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b212bcc01f5c82974b07cbcd17657a49c48f4638430f6e5b8eae46f10d835855/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b212bcc01f5c82974b07cbcd17657a49c48f4638430f6e5b8eae46f10d835855/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:14 np0005531754 podman[103528]: 2025-11-22 05:27:14.307137314 +0000 UTC m=+0.177919732 container init b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:27:14 np0005531754 podman[103528]: 2025-11-22 05:27:14.319012332 +0000 UTC m=+0.189794740 container start b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:14 np0005531754 podman[103528]: 2025-11-22 05:27:14.322847068 +0000 UTC m=+0.193629476 container attach b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 00:27:14 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv[101834]: 2025-11-22T05:27:14.532+0000 7f102e508940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 22 00:27:14 np0005531754 radosgw[101838]: LDAP not started since no server URIs were provided in the configuration.
Nov 22 00:27:14 np0005531754 radosgw[101838]: framework: beast
Nov 22 00:27:14 np0005531754 radosgw[101838]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 22 00:27:14 np0005531754 radosgw[101838]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 22 00:27:14 np0005531754 radosgw[101838]: starting handler: beast
Nov 22 00:27:14 np0005531754 radosgw[101838]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 00:27:14 np0005531754 radosgw[101838]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pzxxqv,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=3481acdc-caf7-460d-ae73-20f679a0fd37,zone_name=default,zonegroup_id=4901858a-2ef2-49a3-9870-8af5774bd334,zonegroup_name=default}
Nov 22 00:27:14 np0005531754 podman[103636]: 2025-11-22 05:27:14.627892806 +0000 UTC m=+0.056017004 container create 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:27:14 np0005531754 systemd[1]: Started libpod-conmon-23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21.scope.
Nov 22 00:27:14 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:14 np0005531754 podman[103636]: 2025-11-22 05:27:14.603585297 +0000 UTC m=+0.031709535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:14 np0005531754 podman[103636]: 2025-11-22 05:27:14.710761014 +0000 UTC m=+0.138885272 container init 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 00:27:14 np0005531754 podman[103636]: 2025-11-22 05:27:14.716444322 +0000 UTC m=+0.144568520 container start 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:14 np0005531754 flamboyant_ptolemy[104188]: 167 167
Nov 22 00:27:14 np0005531754 podman[103636]: 2025-11-22 05:27:14.72166905 +0000 UTC m=+0.149793298 container attach 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:14 np0005531754 systemd[1]: libpod-23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21.scope: Deactivated successfully.
Nov 22 00:27:14 np0005531754 podman[103636]: 2025-11-22 05:27:14.723021631 +0000 UTC m=+0.151145839 container died 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:27:14 np0005531754 systemd[1]: var-lib-containers-storage-overlay-62d5a5b381d3e0bde88a13d927a0cfa8c47ae1a6a748b00459b9e16f5efd1a22-merged.mount: Deactivated successfully.
Nov 22 00:27:14 np0005531754 podman[103636]: 2025-11-22 05:27:14.770797488 +0000 UTC m=+0.198921686 container remove 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:27:14 np0005531754 systemd[1]: libpod-conmon-23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21.scope: Deactivated successfully.
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 22 00:27:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2862054732' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 22 00:27:14 np0005531754 modest_mcclintock[103565]: mimic
Nov 22 00:27:14 np0005531754 systemd[1]: libpod-b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc.scope: Deactivated successfully.
Nov 22 00:27:14 np0005531754 podman[103528]: 2025-11-22 05:27:14.899783696 +0000 UTC m=+0.770566094 container died b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:27:14 np0005531754 podman[104210]: 2025-11-22 05:27:14.914641301 +0000 UTC m=+0.046816847 container create fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:27:14 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b212bcc01f5c82974b07cbcd17657a49c48f4638430f6e5b8eae46f10d835855-merged.mount: Deactivated successfully.
Nov 22 00:27:14 np0005531754 systemd[1]: Started libpod-conmon-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope.
Nov 22 00:27:14 np0005531754 podman[103528]: 2025-11-22 05:27:14.959163154 +0000 UTC m=+0.829945522 container remove b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:14 np0005531754 systemd[1]: libpod-conmon-b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc.scope: Deactivated successfully.
Nov 22 00:27:14 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:14 np0005531754 podman[104210]: 2025-11-22 05:27:14.886375623 +0000 UTC m=+0.018551189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:14 np0005531754 podman[104210]: 2025-11-22 05:27:14.993146571 +0000 UTC m=+0.125322137 container init fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 22 00:27:15 np0005531754 podman[104210]: 2025-11-22 05:27:15.000639259 +0000 UTC m=+0.132814815 container start fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:27:15 np0005531754 podman[104210]: 2025-11-22 05:27:15.005868728 +0000 UTC m=+0.138044274 container attach fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:27:15 np0005531754 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 00:27:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 11 KiB/s wr, 300 op/s
Nov 22 00:27:15 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 22 00:27:15 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 22 00:27:16 np0005531754 python3[104286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]: {
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "osd_id": 1,
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "type": "bluestore"
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:    },
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "osd_id": 2,
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "type": "bluestore"
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:    },
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "osd_id": 0,
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:        "type": "bluestore"
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]:    }
Nov 22 00:27:16 np0005531754 strange_visvesvaraya[104242]: }
Nov 22 00:27:16 np0005531754 podman[104298]: 2025-11-22 05:27:16.064495655 +0000 UTC m=+0.043375559 container create 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:27:16 np0005531754 systemd[1]: libpod-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope: Deactivated successfully.
Nov 22 00:27:16 np0005531754 systemd[1]: libpod-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope: Consumed 1.090s CPU time.
Nov 22 00:27:16 np0005531754 systemd[1]: Started libpod-conmon-59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d.scope.
Nov 22 00:27:16 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7731d954482048cda7aa8ea334f14f96ba29d68ed09183614a2b1c9160c890e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7731d954482048cda7aa8ea334f14f96ba29d68ed09183614a2b1c9160c890e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:16 np0005531754 podman[104315]: 2025-11-22 05:27:16.136899767 +0000 UTC m=+0.032707819 container died fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 00:27:16 np0005531754 podman[104298]: 2025-11-22 05:27:16.048585555 +0000 UTC m=+0.027465479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:16 np0005531754 podman[104298]: 2025-11-22 05:27:16.153086122 +0000 UTC m=+0.131966126 container init 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:27:16 np0005531754 systemd[1]: var-lib-containers-storage-overlay-270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e-merged.mount: Deactivated successfully.
Nov 22 00:27:16 np0005531754 podman[104298]: 2025-11-22 05:27:16.16100049 +0000 UTC m=+0.139880404 container start 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 00:27:16 np0005531754 podman[104298]: 2025-11-22 05:27:16.172946709 +0000 UTC m=+0.151826633 container attach 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:27:16 np0005531754 podman[104315]: 2025-11-22 05:27:16.192854379 +0000 UTC m=+0.088662431 container remove fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:16 np0005531754 systemd[1]: libpod-conmon-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope: Deactivated successfully.
Nov 22 00:27:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:27:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:27:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:16 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 689fcb5c-c553-4ae5-84ae-4e0a24698b4e does not exist
Nov 22 00:27:16 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 79b24e5b-ec5f-4dff-b617-ec7c5eac3d55 does not exist
Nov 22 00:27:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 22 00:27:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828573560' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 22 00:27:16 np0005531754 frosty_allen[104321]: 
Nov 22 00:27:16 np0005531754 frosty_allen[104321]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 22 00:27:16 np0005531754 systemd[1]: libpod-59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d.scope: Deactivated successfully.
Nov 22 00:27:16 np0005531754 podman[104298]: 2025-11-22 05:27:16.761859327 +0000 UTC m=+0.740739291 container died 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:16 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e7731d954482048cda7aa8ea334f14f96ba29d68ed09183614a2b1c9160c890e-merged.mount: Deactivated successfully.
Nov 22 00:27:16 np0005531754 podman[104298]: 2025-11-22 05:27:16.820620622 +0000 UTC m=+0.799500526 container remove 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:27:16 np0005531754 systemd[1]: libpod-conmon-59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d.scope: Deactivated successfully.
Nov 22 00:27:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:17 np0005531754 podman[104587]: 2025-11-22 05:27:17.154747745 +0000 UTC m=+0.088883865 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:17 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Nov 22 00:27:17 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Nov 22 00:27:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:17 np0005531754 podman[104587]: 2025-11-22 05:27:17.279870516 +0000 UTC m=+0.214006546 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 5.7 KiB/s wr, 215 op/s
Nov 22 00:27:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:27:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:27:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:18 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 22 00:27:18 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 22 00:27:18 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 22 00:27:18 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:18 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c0cb22cb-47b9-412c-ac8f-5d74bf79bc78 does not exist
Nov 22 00:27:18 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 8f79917e-6ef9-4011-be8a-466e9127faeb does not exist
Nov 22 00:27:18 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d11cfbe0-d78c-4ba3-9d0d-39714c350d9a does not exist
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:27:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:27:18 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 22 00:27:18 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 22 00:27:19 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 22 00:27:19 np0005531754 podman[105019]: 2025-11-22 05:27:19.135985363 +0000 UTC m=+0.035735468 container create fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:27:19 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 22 00:27:19 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 22 00:27:19 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 22 00:27:19 np0005531754 systemd[1]: Started libpod-conmon-fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19.scope.
Nov 22 00:27:19 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:19 np0005531754 podman[105019]: 2025-11-22 05:27:19.20817279 +0000 UTC m=+0.107922895 container init fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:27:19 np0005531754 podman[105019]: 2025-11-22 05:27:19.213690054 +0000 UTC m=+0.113440159 container start fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:27:19 np0005531754 podman[105019]: 2025-11-22 05:27:19.119737896 +0000 UTC m=+0.019488051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:19 np0005531754 intelligent_shamir[105034]: 167 167
Nov 22 00:27:19 np0005531754 systemd[1]: libpod-fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19.scope: Deactivated successfully.
Nov 22 00:27:19 np0005531754 podman[105019]: 2025-11-22 05:27:19.221344597 +0000 UTC m=+0.121094782 container attach fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:19 np0005531754 podman[105019]: 2025-11-22 05:27:19.221974331 +0000 UTC m=+0.121724466 container died fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:19 np0005531754 systemd[1]: var-lib-containers-storage-overlay-feccd5690fa1d76c01dde8e3328b5ea2772a5a108d57ddc06298daefe9cb74d8-merged.mount: Deactivated successfully.
Nov 22 00:27:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:27:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:27:19 np0005531754 podman[105019]: 2025-11-22 05:27:19.272086751 +0000 UTC m=+0.171836856 container remove fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:27:19 np0005531754 systemd[1]: libpod-conmon-fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19.scope: Deactivated successfully.
Nov 22 00:27:19 np0005531754 podman[105061]: 2025-11-22 05:27:19.411608826 +0000 UTC m=+0.042259664 container create ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 00:27:19 np0005531754 systemd[1]: Started libpod-conmon-ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286.scope.
Nov 22 00:27:19 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:19 np0005531754 podman[105061]: 2025-11-22 05:27:19.485355499 +0000 UTC m=+0.116006347 container init ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:27:19 np0005531754 podman[105061]: 2025-11-22 05:27:19.390451199 +0000 UTC m=+0.021102057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:19 np0005531754 podman[105061]: 2025-11-22 05:27:19.490496975 +0000 UTC m=+0.121147823 container start ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 00:27:19 np0005531754 podman[105061]: 2025-11-22 05:27:19.494120426 +0000 UTC m=+0.124771274 container attach ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:27:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.6 KiB/s wr, 175 op/s
Nov 22 00:27:19 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 22 00:27:19 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 22 00:27:20 np0005531754 affectionate_joliot[105077]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:27:20 np0005531754 affectionate_joliot[105077]: --> relative data size: 1.0
Nov 22 00:27:20 np0005531754 affectionate_joliot[105077]: --> All data devices are unavailable
Nov 22 00:27:20 np0005531754 systemd[1]: libpod-ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286.scope: Deactivated successfully.
Nov 22 00:27:20 np0005531754 podman[105061]: 2025-11-22 05:27:20.481046857 +0000 UTC m=+1.111697705 container died ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:20 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d-merged.mount: Deactivated successfully.
Nov 22 00:27:20 np0005531754 podman[105061]: 2025-11-22 05:27:20.553717205 +0000 UTC m=+1.184368053 container remove ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:27:20 np0005531754 systemd[1]: libpod-conmon-ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286.scope: Deactivated successfully.
Nov 22 00:27:21 np0005531754 podman[105261]: 2025-11-22 05:27:21.171036703 +0000 UTC m=+0.048695919 container create 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:27:21 np0005531754 systemd[1]: Started libpod-conmon-715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae.scope.
Nov 22 00:27:21 np0005531754 podman[105261]: 2025-11-22 05:27:21.151704997 +0000 UTC m=+0.029364213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:21 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:21 np0005531754 podman[105261]: 2025-11-22 05:27:21.268145192 +0000 UTC m=+0.145804398 container init 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 00:27:21 np0005531754 podman[105261]: 2025-11-22 05:27:21.276134632 +0000 UTC m=+0.153793858 container start 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:21 np0005531754 happy_leavitt[105277]: 167 167
Nov 22 00:27:21 np0005531754 systemd[1]: libpod-715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae.scope: Deactivated successfully.
Nov 22 00:27:21 np0005531754 podman[105261]: 2025-11-22 05:27:21.283488628 +0000 UTC m=+0.161147854 container attach 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 00:27:21 np0005531754 podman[105261]: 2025-11-22 05:27:21.284417119 +0000 UTC m=+0.162076305 container died 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:27:21 np0005531754 systemd[1]: var-lib-containers-storage-overlay-2c9e6d19a5c97651394a5561e90020bc9295f79a31a0aac6076ac8f5c9bcab51-merged.mount: Deactivated successfully.
Nov 22 00:27:21 np0005531754 podman[105261]: 2025-11-22 05:27:21.328227497 +0000 UTC m=+0.205886663 container remove 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:27:21 np0005531754 systemd[1]: libpod-conmon-715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae.scope: Deactivated successfully.
Nov 22 00:27:21 np0005531754 podman[105301]: 2025-11-22 05:27:21.543687214 +0000 UTC m=+0.069327973 container create cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:21 np0005531754 systemd[1]: Started libpod-conmon-cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0.scope.
Nov 22 00:27:21 np0005531754 podman[105301]: 2025-11-22 05:27:21.51642272 +0000 UTC m=+0.042063529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:21 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:21 np0005531754 podman[105301]: 2025-11-22 05:27:21.633343126 +0000 UTC m=+0.158983935 container init cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:27:21 np0005531754 podman[105301]: 2025-11-22 05:27:21.641856908 +0000 UTC m=+0.167497617 container start cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:21 np0005531754 podman[105301]: 2025-11-22 05:27:21.645340707 +0000 UTC m=+0.170981516 container attach cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 00:27:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.1 KiB/s wr, 190 op/s
Nov 22 00:27:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:22 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Nov 22 00:27:22 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]: {
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:    "0": [
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:        {
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "devices": [
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "/dev/loop3"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            ],
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_name": "ceph_lv0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_size": "21470642176",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "name": "ceph_lv0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "tags": {
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.crush_device_class": "",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.encrypted": "0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osd_id": "0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.type": "block",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.vdo": "0"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            },
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "type": "block",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "vg_name": "ceph_vg0"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:        }
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:    ],
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:    "1": [
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:        {
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "devices": [
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "/dev/loop4"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            ],
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_name": "ceph_lv1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_size": "21470642176",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "name": "ceph_lv1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "tags": {
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.crush_device_class": "",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.encrypted": "0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osd_id": "1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.type": "block",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.vdo": "0"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            },
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "type": "block",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "vg_name": "ceph_vg1"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:        }
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:    ],
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:    "2": [
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:        {
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "devices": [
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "/dev/loop5"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            ],
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_name": "ceph_lv2",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_size": "21470642176",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "name": "ceph_lv2",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "tags": {
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.cluster_name": "ceph",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.crush_device_class": "",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.encrypted": "0",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osd_id": "2",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.type": "block",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:                "ceph.vdo": "0"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            },
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "type": "block",
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:            "vg_name": "ceph_vg2"
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:        }
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]:    ]
Nov 22 00:27:22 np0005531754 reverent_solomon[105318]: }
Nov 22 00:27:22 np0005531754 systemd[1]: libpod-cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0.scope: Deactivated successfully.
Nov 22 00:27:22 np0005531754 podman[105301]: 2025-11-22 05:27:22.452046194 +0000 UTC m=+0.977686933 container died cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:27:22 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf-merged.mount: Deactivated successfully.
Nov 22 00:27:22 np0005531754 podman[105301]: 2025-11-22 05:27:22.529219794 +0000 UTC m=+1.054860523 container remove cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:27:22 np0005531754 systemd[1]: libpod-conmon-cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0.scope: Deactivated successfully.
Nov 22 00:27:22 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 22 00:27:22 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 22 00:27:23 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 22 00:27:23 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 22 00:27:23 np0005531754 podman[105478]: 2025-11-22 05:27:23.312863851 +0000 UTC m=+0.044996045 container create 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:23 np0005531754 systemd[1]: Started libpod-conmon-2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48.scope.
Nov 22 00:27:23 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:23 np0005531754 podman[105478]: 2025-11-22 05:27:23.29329208 +0000 UTC m=+0.025424314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:23 np0005531754 podman[105478]: 2025-11-22 05:27:23.397275954 +0000 UTC m=+0.129408238 container init 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 00:27:23 np0005531754 podman[105478]: 2025-11-22 05:27:23.409526371 +0000 UTC m=+0.141658565 container start 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:27:23 np0005531754 kind_sutherland[105494]: 167 167
Nov 22 00:27:23 np0005531754 podman[105478]: 2025-11-22 05:27:23.413234264 +0000 UTC m=+0.145366558 container attach 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:27:23 np0005531754 systemd[1]: libpod-2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48.scope: Deactivated successfully.
Nov 22 00:27:23 np0005531754 podman[105478]: 2025-11-22 05:27:23.413855808 +0000 UTC m=+0.145988012 container died 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:27:23 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e691f398e26146b65ed9a7a134ae70bd6078277235fd48ff9f27861e14e6923c-merged.mount: Deactivated successfully.
Nov 22 00:27:23 np0005531754 podman[105478]: 2025-11-22 05:27:23.458700929 +0000 UTC m=+0.190833153 container remove 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 22 00:27:23 np0005531754 systemd[1]: libpod-conmon-2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48.scope: Deactivated successfully.
Nov 22 00:27:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.4 KiB/s wr, 158 op/s
Nov 22 00:27:23 np0005531754 podman[105517]: 2025-11-22 05:27:23.696397398 +0000 UTC m=+0.059246057 container create 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:27:23 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 22 00:27:23 np0005531754 systemd[1]: Started libpod-conmon-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope.
Nov 22 00:27:23 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 22 00:27:23 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:23 np0005531754 podman[105517]: 2025-11-22 05:27:23.674070615 +0000 UTC m=+0.036919304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:27:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:23 np0005531754 podman[105517]: 2025-11-22 05:27:23.78119095 +0000 UTC m=+0.144039619 container init 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:27:23 np0005531754 podman[105517]: 2025-11-22 05:27:23.794456069 +0000 UTC m=+0.157304718 container start 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:27:23 np0005531754 podman[105517]: 2025-11-22 05:27:23.798544261 +0000 UTC m=+0.161393000 container attach 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]: {
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "osd_id": 1,
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "type": "bluestore"
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:    },
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "osd_id": 2,
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "type": "bluestore"
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:    },
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "osd_id": 0,
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:        "type": "bluestore"
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]:    }
Nov 22 00:27:24 np0005531754 sleepy_satoshi[105534]: }
Nov 22 00:27:24 np0005531754 systemd[1]: libpod-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope: Deactivated successfully.
Nov 22 00:27:24 np0005531754 podman[105517]: 2025-11-22 05:27:24.894007369 +0000 UTC m=+1.256856038 container died 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:27:24 np0005531754 systemd[1]: libpod-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope: Consumed 1.107s CPU time.
Nov 22 00:27:24 np0005531754 systemd[1]: var-lib-containers-storage-overlay-45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5-merged.mount: Deactivated successfully.
Nov 22 00:27:24 np0005531754 podman[105517]: 2025-11-22 05:27:24.961593882 +0000 UTC m=+1.324442561 container remove 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:24 np0005531754 systemd[1]: libpod-conmon-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope: Deactivated successfully.
Nov 22 00:27:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:27:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:27:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5d1eda3a-68d6-4172-b1bc-7e9176ddc661 does not exist
Nov 22 00:27:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 3e2a0a41-7236-4adb-8c8a-429a6fff16ea does not exist
Nov 22 00:27:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3 KiB/s wr, 139 op/s
Nov 22 00:27:26 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 22 00:27:26 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 22 00:27:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:27 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 22 00:27:27 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 22 00:27:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 00:27:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 22 00:27:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 22 00:27:29 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 22 00:27:29 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 22 00:27:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 00:27:30 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 22 00:27:30 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 22 00:27:31 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 22 00:27:31 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 22 00:27:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 00:27:32 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 22 00:27:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:32 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 22 00:27:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 22 00:27:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 22 00:27:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:33 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 22 00:27:33 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 22 00:27:34 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 22 00:27:35 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 22 00:27:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:35 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 22 00:27:36 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 22 00:27:36 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 22 00:27:36 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 22 00:27:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:37 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 22 00:27:37 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 22 00:27:38 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 22 00:27:38 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 22 00:27:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:39 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 22 00:27:39 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 22 00:27:39 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 22 00:27:39 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 22 00:27:40 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Nov 22 00:27:40 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Nov 22 00:27:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:41 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 22 00:27:41 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 22 00:27:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:43 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 22 00:27:43 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:27:43
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control']
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:27:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:27:44 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Nov 22 00:27:44 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Nov 22 00:27:44 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 22 00:27:44 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 22 00:27:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:45 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 22 00:27:45 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 22 00:27:46 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 22 00:27:46 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 22 00:27:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.18 deep-scrub starts
Nov 22 00:27:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.18 deep-scrub ok
Nov 22 00:27:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:47 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 22 00:27:47 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 22 00:27:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:47 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 22 00:27:47 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 25e9affc-cc76-49cd-a329-2a7fae9b31ce (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:49 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 22 00:27:49 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 22 00:27:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:27:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:49 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 22 00:27:49 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 22 00:27:50 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev a76e7016-1218-4220-835a-364659c71a5d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:50 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 22 00:27:50 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 57 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=47/48 n=4 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=11.799673080s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 48'3 active pruub 135.846664429s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 57 pg[8.0( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=11.799673080s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 unknown pruub 135.846664429s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 22 00:27:51 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 077c803c-8c32-43a6-ac82-e53137e4cb61 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.11( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.13( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.12( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.19( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.4( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.5( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.6( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.7( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.9( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.8( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.3( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.2( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.10( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.16( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.17( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.13( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.7( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v143: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:27:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:51 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 22 00:27:51 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] update: starting ev 45010bcc-ee36-43a4-a508-7f028622ea8d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 25e9affc-cc76-49cd-a329-2a7fae9b31ce (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 25e9affc-cc76-49cd-a329-2a7fae9b31ce (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev a76e7016-1218-4220-835a-364659c71a5d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event a76e7016-1218-4220-835a-364659c71a5d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 077c803c-8c32-43a6-ac82-e53137e4cb61 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 077c803c-8c32-43a6-ac82-e53137e4cb61 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] complete: finished ev 45010bcc-ee36-43a4-a508-7f028622ea8d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 00:27:52 np0005531754 ceph-mgr[76134]: [progress INFO root] Completed event 45010bcc-ee36-43a4-a508-7f028622ea8d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 59 pg[9.0( v 55'578 (0'0,55'578] local-lis/les=49/50 n=209 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=11.972432137s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 55'577 mlcod 55'577 active pruub 137.870498657s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=13.982721329s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 52'15 active pruub 133.779541016s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=13.982721329s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 unknown pruub 133.779541016s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 59 pg[9.0( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=11.972432137s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 55'577 mlcod 0'0 unknown pruub 137.870498657s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 22 00:27:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 22 00:27:53 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.15( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.14( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.17( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.16( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.11( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.3( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.2( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.c( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.d( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.b( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.f( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.9( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.a( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.e( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.8( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.6( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.7( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.4( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1a( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.5( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.18( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.19( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1e( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1f( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1c( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1d( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.12( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.13( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1b( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.10( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.14( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.0( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 55'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.2( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.a( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.4( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1a( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.12( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.10( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v146: 290 pgs: 93 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 00:27:53 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:53 np0005531754 ceph-mgr[76134]: [progress INFO root] Writing back 16 completed events
Nov 22 00:27:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 00:27:53 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 22 00:27:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 22 00:27:54 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 22 00:27:54 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.728989601s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 141.913238525s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:27:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 00:27:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:27:54 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.728989601s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown pruub 141.913238525s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 22 00:27:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 00:27:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 22 00:27:55 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.0( empty local-lis/les=61/62 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:27:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 1 peering, 62 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:56 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 22 00:27:56 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 22 00:27:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:27:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:57 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 22 00:27:57 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 22 00:27:59 np0005531754 python3[105654]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:27:59 np0005531754 podman[105655]: 2025-11-22 05:27:59.595335232 +0000 UTC m=+0.042113491 container create 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:59 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 22 00:27:59 np0005531754 systemd[1]: Started libpod-conmon-0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92.scope.
Nov 22 00:27:59 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 22 00:27:59 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:27:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c73955b284eac58526a2857673ff5a593ab9a6f65d132cfb340cf5d02d70f0b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c73955b284eac58526a2857673ff5a593ab9a6f65d132cfb340cf5d02d70f0b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:27:59 np0005531754 podman[105655]: 2025-11-22 05:27:59.577121463 +0000 UTC m=+0.023899762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:27:59 np0005531754 podman[105655]: 2025-11-22 05:27:59.681047702 +0000 UTC m=+0.127826011 container init 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:27:59 np0005531754 podman[105655]: 2025-11-22 05:27:59.689270072 +0000 UTC m=+0.136048331 container start 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:27:59 np0005531754 podman[105655]: 2025-11-22 05:27:59.693561838 +0000 UTC m=+0.140340097 container attach 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:27:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:27:59 np0005531754 sharp_raman[105670]: could not fetch user info: no user info saved
Nov 22 00:27:59 np0005531754 systemd[1]: libpod-0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92.scope: Deactivated successfully.
Nov 22 00:27:59 np0005531754 podman[105655]: 2025-11-22 05:27:59.889860895 +0000 UTC m=+0.336639154 container died 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:27:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c73955b284eac58526a2857673ff5a593ab9a6f65d132cfb340cf5d02d70f0b4-merged.mount: Deactivated successfully.
Nov 22 00:27:59 np0005531754 podman[105655]: 2025-11-22 05:27:59.936228139 +0000 UTC m=+0.383006398 container remove 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:27:59 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 22 00:27:59 np0005531754 systemd[1]: libpod-conmon-0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92.scope: Deactivated successfully.
Nov 22 00:27:59 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 22 00:28:00 np0005531754 python3[105791]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:28:00 np0005531754 podman[105792]: 2025-11-22 05:28:00.320045428 +0000 UTC m=+0.055012058 container create 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:28:00 np0005531754 systemd[1]: Started libpod-conmon-079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6.scope.
Nov 22 00:28:00 np0005531754 podman[105792]: 2025-11-22 05:28:00.291336247 +0000 UTC m=+0.026302967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 00:28:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:28:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c81fe387491afe299dff25fb7686cd4b3e49d22928ac0346f818d682df4fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c81fe387491afe299dff25fb7686cd4b3e49d22928ac0346f818d682df4fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:00 np0005531754 podman[105792]: 2025-11-22 05:28:00.409641992 +0000 UTC m=+0.144608692 container init 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:28:00 np0005531754 podman[105792]: 2025-11-22 05:28:00.416757223 +0000 UTC m=+0.151723853 container start 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:28:00 np0005531754 podman[105792]: 2025-11-22 05:28:00.420952166 +0000 UTC m=+0.155918886 container attach 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:28:00 np0005531754 zen_gates[105808]: {
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "user_id": "openstack",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "display_name": "openstack",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "email": "",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "suspended": 0,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "max_buckets": 1000,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "subusers": [],
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "keys": [
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        {
Nov 22 00:28:00 np0005531754 zen_gates[105808]:            "user": "openstack",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:            "access_key": "45SSD5RELPBSE4WQZF6L",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:            "secret_key": "VDGYg3Z5z1Bs0wbo3i7bLtzN8JHYmT4RogDbSS31"
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        }
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    ],
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "swift_keys": [],
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "caps": [],
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "op_mask": "read, write, delete",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "default_placement": "",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "default_storage_class": "",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "placement_tags": [],
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "bucket_quota": {
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "enabled": false,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "check_on_raw": false,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "max_size": -1,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "max_size_kb": 0,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "max_objects": -1
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    },
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "user_quota": {
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "enabled": false,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "check_on_raw": false,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "max_size": -1,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "max_size_kb": 0,
Nov 22 00:28:00 np0005531754 zen_gates[105808]:        "max_objects": -1
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    },
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "temp_url_keys": [],
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "type": "rgw",
Nov 22 00:28:00 np0005531754 zen_gates[105808]:    "mfa_ids": []
Nov 22 00:28:00 np0005531754 zen_gates[105808]: }
Nov 22 00:28:00 np0005531754 zen_gates[105808]: 
Nov 22 00:28:00 np0005531754 systemd[1]: libpod-079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6.scope: Deactivated successfully.
Nov 22 00:28:00 np0005531754 podman[105792]: 2025-11-22 05:28:00.64959847 +0000 UTC m=+0.384565100 container died 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:28:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-be1c81fe387491afe299dff25fb7686cd4b3e49d22928ac0346f818d682df4fa-merged.mount: Deactivated successfully.
Nov 22 00:28:00 np0005531754 podman[105792]: 2025-11-22 05:28:00.686616504 +0000 UTC m=+0.421583134 container remove 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 00:28:00 np0005531754 systemd[1]: libpod-conmon-079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6.scope: Deactivated successfully.
Nov 22 00:28:00 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 22 00:28:00 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 22 00:28:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 0 op/s
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.859148026s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.079818726s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864239693s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.084945679s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864306450s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085037231s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.858970642s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.079696655s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.859076500s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.079818726s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864220619s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085037231s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864105225s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.084945679s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864074707s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085067749s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864051819s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085067749s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864281654s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085372925s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864239693s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085372925s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864126205s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085403442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864109039s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085357666s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864109993s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085403442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864319801s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085678101s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864022255s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085357666s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864291191s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085678101s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864405632s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085906982s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.858904839s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.079696655s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864633560s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086242676s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864316940s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086013794s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864212036s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085922241s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864229202s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086013794s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864137650s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085922241s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864124298s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085922241s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864116669s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085906982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863928795s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085922241s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863899231s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.086059570s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863842010s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.086059570s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863701820s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086029053s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864584923s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086242676s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863669395s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086029053s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863524437s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.086105347s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863456726s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086090088s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863484383s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.086105347s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863412857s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086090088s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863309860s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086227417s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863193512s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086135864s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863249779s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086227417s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863136292s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086135864s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863255501s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.086120605s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863077164s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.085968018s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.862845421s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.086120605s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.862667084s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.085968018s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.10( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.9( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.11( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.7( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.873176575s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.214599609s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845706940s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.187194824s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.873139381s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.214599609s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845680237s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.187194824s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878857613s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220581055s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845470428s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.187225342s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878817558s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220581055s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845444679s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.187225342s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878869057s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220687866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818603516s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160446167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878838539s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220687866s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818561554s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160446167s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818313599s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160354614s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852367401s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194427490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852294922s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194427490s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878567696s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220718384s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818254471s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160354614s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878544807s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220718384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878473282s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220748901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878455162s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220748901s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852027893s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194442749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851988792s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194442749s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878264427s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220825195s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817679405s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160263062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817655563s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160263062s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878222466s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220825195s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852129936s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194763184s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852100372s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194763184s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878142357s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220886230s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817591667s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160369873s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817516327s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160293579s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878114700s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220886230s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817567825s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160369873s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817477226s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160293579s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877993584s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220825195s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.17( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877963066s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220825195s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817594528s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160339355s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851891518s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194824219s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817395210s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160339355s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878018379s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221008301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851870537s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194824219s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877997398s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221008301s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817250252s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160293579s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.15( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817233086s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160293579s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877840042s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 active pruub 144.220977783s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851682663s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194824219s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877781868s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.220977783s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.2( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851609230s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194824219s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851533890s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194778442s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851512909s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194778442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816669464s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160003662s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877687454s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221023560s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877670288s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221023560s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816639900s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160003662s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816723824s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160171509s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816693306s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160171509s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816498756s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160018921s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877540588s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221084595s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877521515s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221084595s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851271629s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194961548s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851255417s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194961548s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877268791s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221054077s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877254486s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221054077s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816029549s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159942627s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816016197s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159942627s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877111435s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221145630s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877096176s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221145630s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850822449s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194885254s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850746155s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194885254s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815528870s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159805298s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850737572s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195037842s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815503120s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159805298s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876886368s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221221924s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850709915s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195037842s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876855850s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221221924s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815890312s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160385132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.d( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.14( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815864563s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160385132s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876600266s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221237183s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850482941s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195144653s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850457191s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195144653s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876572609s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221237183s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876490593s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221221924s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876454353s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221221924s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.814990044s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159805298s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.814963341s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159805298s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815032959s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160049438s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.10( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815011978s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160049438s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.2( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.9( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876410484s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221252441s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876093864s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221313477s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876065254s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221252441s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876073837s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221313477s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813920021s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159286499s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849807739s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195175171s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813898087s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159286499s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875850677s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221328735s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849752426s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195175171s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875827789s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221328735s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849766731s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195312500s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849741936s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195312500s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816480637s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160018921s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875667572s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221343994s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875649452s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221343994s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875610352s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221359253s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.8( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812859535s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159240723s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875589371s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221359253s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812825203s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159240723s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.807912827s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.154525757s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874752045s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221389771s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.807893753s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.154525757s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874724388s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221389771s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813027382s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159835815s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848503113s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195327759s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813008308s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159835815s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848477364s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195327759s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874420166s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221359253s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874378204s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221359253s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848275185s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195404053s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848199844s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195404053s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812411308s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159774780s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812382698s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159774780s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.3( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.4( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.18( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.11( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.6( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.12( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.10( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.19( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.1a( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 22 00:28:01 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 22 00:28:01 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1a( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.15( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.9( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.d( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.12( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.11( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1c( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.18( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.9( v 62'1 lc 0'0 (0'0,62'1] local-lis/les=63/64 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=62'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.8( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.d( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.2( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.15( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.3( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.14( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.17( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.19( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.1( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.e( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.14( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.6( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.4( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.10( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 00:28:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:28:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 22 00:28:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 22 00:28:03 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 22 00:28:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 22 00:28:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 00:28:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 22 00:28:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 00:28:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 22 00:28:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 00:28:04 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.990532875s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.639038086s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.990456581s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.639038086s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989582062s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638519287s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.990011215s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638977051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989490509s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638519287s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989901543s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638977051s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989019394s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638381958s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988957405s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638381958s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989553452s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638580322s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988526344s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638214111s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988478661s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638214111s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988895416s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638687134s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988797188s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638687134s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988625526s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638580322s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988069534s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638092041s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.987929344s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638092041s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988601685s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638885498s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988614082s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638900757s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988554955s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638885498s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988540649s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638900757s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989434242s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638977051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988452911s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638977051s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988288879s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638977051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.979233742s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.630081177s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988225937s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638977051s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.979169846s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.630081177s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988011360s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.639053345s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.986865044s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638015747s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.987811089s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.639053345s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.986662865s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638015747s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 22 00:28:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 00:28:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 22 00:28:05 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 22 00:28:05 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=13.975119591s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638442993s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:05 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=13.975032806s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638442993s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 260 B/s wr, 0 op/s; 788 B/s, 25 objects/s recovering
Nov 22 00:28:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 22 00:28:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 22 00:28:06 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 22 00:28:06 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 68 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:07 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 22 00:28:07 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 22 00:28:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 220 B/s rd, 441 B/s wr, 0 op/s; 746 B/s, 21 objects/s recovering
Nov 22 00:28:07 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 22 00:28:07 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 22 00:28:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 0 op/s; 576 B/s, 16 objects/s recovering
Nov 22 00:28:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 134 B/s rd, 268 B/s wr, 0 op/s; 468 B/s, 14 objects/s recovering
Nov 22 00:28:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 00:28:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 00:28:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 22 00:28:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 00:28:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 00:28:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 22 00:28:12 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 22 00:28:12 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 22 00:28:12 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 22 00:28:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 00:28:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s; 59 B/s, 1 objects/s recovering
Nov 22 00:28:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 00:28:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 00:28:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:28:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:28:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:28:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:28:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:28:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:28:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 22 00:28:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 00:28:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 22 00:28:14 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 22 00:28:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 00:28:14 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 22 00:28:14 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 22 00:28:14 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 22 00:28:14 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 22 00:28:14 np0005531754 systemd-logind[798]: New session 33 of user zuul.
Nov 22 00:28:14 np0005531754 systemd[1]: Started Session 33 of User zuul.
Nov 22 00:28:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 00:28:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 22 00:28:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 22 00:28:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 00:28:15 np0005531754 python3.9[106058]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:28:15 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 22 00:28:15 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 22 00:28:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 22 00:28:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 00:28:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 22 00:28:16 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 22 00:28:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 00:28:16 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 22 00:28:16 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 22 00:28:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 00:28:17 np0005531754 python3.9[106276]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:28:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 22 00:28:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 00:28:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 22 00:28:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 00:28:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 22 00:28:18 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 22 00:28:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 00:28:18 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Nov 22 00:28:18 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456052780s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.194747925s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456364632s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.195281982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456313133s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.195281982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456233025s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.195312500s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456089020s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.195312500s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.455579758s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.194747925s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.455844879s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.195404053s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.455754280s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.195404053s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 22 00:28:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 00:28:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 22 00:28:19 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 22 00:28:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Nov 22 00:28:19 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Nov 22 00:28:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 22 00:28:20 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 00:28:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 00:28:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 22 00:28:20 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 22 00:28:20 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:20 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:20 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:20 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:20 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 22 00:28:20 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 22 00:28:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 22 00:28:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.830095291s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.121322632s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829577446s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.121124268s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829159737s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.120864868s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829389572s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.121124268s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829584122s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.121322632s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829086304s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.120864868s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.827204704s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.120880127s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.826862335s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.120880127s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 22 00:28:21 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979733467s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.857635498s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.982757568s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.860687256s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.982709885s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.860687256s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979663849s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.857635498s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979160309s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.857666016s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979077339s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.857666016s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.976745605s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.855407715s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.976223946s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.855407715s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 22 00:28:21 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 22 00:28:21 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 22 00:28:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 22 00:28:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 22 00:28:22 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=75/76 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:22 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 22 00:28:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 22 00:28:23 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 22 00:28:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 00:28:23 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:23 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:23 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:23 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 22 00:28:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 22 00:28:24 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.434611320s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.791946411s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.434338570s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.791931152s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.434274673s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.791931152s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.439762115s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.797256470s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.438985825s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.797241211s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.438868523s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.797241211s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.433600426s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.791946411s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.438651085s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.797256470s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 22 00:28:24 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 22 00:28:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 22 00:28:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 22 00:28:25 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 22 00:28:25 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:25 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:25 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:25 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=78/79 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:25 np0005531754 systemd-logind[798]: Session 33 logged out. Waiting for processes to exit.
Nov 22 00:28:25 np0005531754 systemd[1]: session-33.scope: Deactivated successfully.
Nov 22 00:28:25 np0005531754 systemd[1]: session-33.scope: Consumed 8.545s CPU time.
Nov 22 00:28:25 np0005531754 systemd-logind[798]: Removed session 33.
Nov 22 00:28:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:28:26 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 10f554f2-6f32-4998-99f2-24393e580269 does not exist
Nov 22 00:28:26 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 356a2ee5-a32b-4d4b-aa48-0cfd1d97143d does not exist
Nov 22 00:28:26 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 8e82d58e-75f3-4df9-8731-a15f83e7944b does not exist
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:28:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:28:26 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 22 00:28:26 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 22 00:28:26 np0005531754 podman[106605]: 2025-11-22 05:28:26.922073577 +0000 UTC m=+0.062908649 container create 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:28:26 np0005531754 systemd[1]: Started libpod-conmon-30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b.scope.
Nov 22 00:28:26 np0005531754 podman[106605]: 2025-11-22 05:28:26.890008896 +0000 UTC m=+0.030843998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:28:27 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:28:27 np0005531754 podman[106605]: 2025-11-22 05:28:27.025210515 +0000 UTC m=+0.166045667 container init 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:28:27 np0005531754 podman[106605]: 2025-11-22 05:28:27.037808253 +0000 UTC m=+0.178643315 container start 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:28:27 np0005531754 podman[106605]: 2025-11-22 05:28:27.042510149 +0000 UTC m=+0.183345271 container attach 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:28:27 np0005531754 stoic_rosalind[106621]: 167 167
Nov 22 00:28:27 np0005531754 systemd[1]: libpod-30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b.scope: Deactivated successfully.
Nov 22 00:28:27 np0005531754 podman[106605]: 2025-11-22 05:28:27.046025693 +0000 UTC m=+0.186860755 container died 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:28:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:27 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4de9313a2c58113291b9deb7115c838f1c0bcb4f6d05225bf9854950e94ebcde-merged.mount: Deactivated successfully.
Nov 22 00:28:27 np0005531754 podman[106605]: 2025-11-22 05:28:27.101089661 +0000 UTC m=+0.241924723 container remove 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:28:27 np0005531754 systemd[1]: libpod-conmon-30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b.scope: Deactivated successfully.
Nov 22 00:28:27 np0005531754 podman[106643]: 2025-11-22 05:28:27.312134904 +0000 UTC m=+0.046498859 container create 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:28:27 np0005531754 systemd[1]: Started libpod-conmon-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope.
Nov 22 00:28:27 np0005531754 podman[106643]: 2025-11-22 05:28:27.291158191 +0000 UTC m=+0.025522176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:28:27 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:28:27 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:27 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:27 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:27 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:27 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:27 np0005531754 podman[106643]: 2025-11-22 05:28:27.431720952 +0000 UTC m=+0.166084977 container init 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:28:27 np0005531754 podman[106643]: 2025-11-22 05:28:27.44841524 +0000 UTC m=+0.182779215 container start 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:28:27 np0005531754 podman[106643]: 2025-11-22 05:28:27.452157901 +0000 UTC m=+0.186521906 container attach 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:28:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 725 B/s wr, 35 op/s; 194 B/s, 8 objects/s recovering
Nov 22 00:28:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 22 00:28:27 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 00:28:27 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 22 00:28:27 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 22 00:28:27 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 22 00:28:27 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 22 00:28:28 np0005531754 hungry_edison[106659]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:28:28 np0005531754 hungry_edison[106659]: --> relative data size: 1.0
Nov 22 00:28:28 np0005531754 hungry_edison[106659]: --> All data devices are unavailable
Nov 22 00:28:28 np0005531754 systemd[1]: libpod-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope: Deactivated successfully.
Nov 22 00:28:28 np0005531754 systemd[1]: libpod-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope: Consumed 1.004s CPU time.
Nov 22 00:28:28 np0005531754 podman[106643]: 2025-11-22 05:28:28.503774938 +0000 UTC m=+1.238138873 container died 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:28:28 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57-merged.mount: Deactivated successfully.
Nov 22 00:28:28 np0005531754 podman[106643]: 2025-11-22 05:28:28.590437424 +0000 UTC m=+1.324801359 container remove 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:28:28 np0005531754 systemd[1]: libpod-conmon-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope: Deactivated successfully.
Nov 22 00:28:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 22 00:28:28 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 00:28:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 00:28:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 22 00:28:28 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 22 00:28:28 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 22 00:28:28 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 22 00:28:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 22 00:28:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 22 00:28:29 np0005531754 podman[106841]: 2025-11-22 05:28:29.351384251 +0000 UTC m=+0.042035909 container create cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:28:29 np0005531754 systemd[1]: Started libpod-conmon-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope.
Nov 22 00:28:29 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:28:29 np0005531754 podman[106841]: 2025-11-22 05:28:29.334618731 +0000 UTC m=+0.025270399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:28:29 np0005531754 podman[106841]: 2025-11-22 05:28:29.443432321 +0000 UTC m=+0.134084059 container init cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:28:29 np0005531754 podman[106841]: 2025-11-22 05:28:29.456030029 +0000 UTC m=+0.146681687 container start cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 00:28:29 np0005531754 podman[106841]: 2025-11-22 05:28:29.461699581 +0000 UTC m=+0.152351319 container attach cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:28:29 np0005531754 vigilant_ishizaka[106857]: 167 167
Nov 22 00:28:29 np0005531754 systemd[1]: libpod-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope: Deactivated successfully.
Nov 22 00:28:29 np0005531754 conmon[106857]: conmon cc1c4f0da7f0944371ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope/container/memory.events
Nov 22 00:28:29 np0005531754 podman[106841]: 2025-11-22 05:28:29.46384839 +0000 UTC m=+0.154500068 container died cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 22 00:28:29 np0005531754 systemd[1]: var-lib-containers-storage-overlay-678af39e56e29a5fac3bbe15f45436e4d6fbfb7c58fdeb685d2bb9cef52c10df-merged.mount: Deactivated successfully.
Nov 22 00:28:29 np0005531754 podman[106841]: 2025-11-22 05:28:29.512163306 +0000 UTC m=+0.202814984 container remove cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:28:29 np0005531754 systemd[1]: libpod-conmon-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope: Deactivated successfully.
Nov 22 00:28:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 00:28:29 np0005531754 podman[106878]: 2025-11-22 05:28:29.686091352 +0000 UTC m=+0.045010478 container create 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:28:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 682 B/s wr, 32 op/s; 183 B/s, 7 objects/s recovering
Nov 22 00:28:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 22 00:28:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 00:28:29 np0005531754 systemd[1]: Started libpod-conmon-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope.
Nov 22 00:28:29 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 22 00:28:29 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 22 00:28:29 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:28:29 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:29 np0005531754 podman[106878]: 2025-11-22 05:28:29.668947952 +0000 UTC m=+0.027867078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:28:29 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:29 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:29 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:29 np0005531754 podman[106878]: 2025-11-22 05:28:29.776128579 +0000 UTC m=+0.135047735 container init 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:28:29 np0005531754 podman[106878]: 2025-11-22 05:28:29.788095479 +0000 UTC m=+0.147014625 container start 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:28:29 np0005531754 podman[106878]: 2025-11-22 05:28:29.793053583 +0000 UTC m=+0.151972779 container attach 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:28:30 np0005531754 kind_darwin[106895]: {
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:    "0": [
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:        {
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "devices": [
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "/dev/loop3"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            ],
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_name": "ceph_lv0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_size": "21470642176",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "name": "ceph_lv0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "tags": {
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cluster_name": "ceph",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.crush_device_class": "",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.encrypted": "0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osd_id": "0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.type": "block",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.vdo": "0"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            },
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "type": "block",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "vg_name": "ceph_vg0"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:        }
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:    ],
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:    "1": [
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:        {
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "devices": [
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "/dev/loop4"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            ],
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_name": "ceph_lv1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_size": "21470642176",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "name": "ceph_lv1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "tags": {
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cluster_name": "ceph",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.crush_device_class": "",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.encrypted": "0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osd_id": "1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.type": "block",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.vdo": "0"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            },
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "type": "block",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "vg_name": "ceph_vg1"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:        }
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:    ],
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:    "2": [
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:        {
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "devices": [
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "/dev/loop5"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            ],
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_name": "ceph_lv2",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_size": "21470642176",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "name": "ceph_lv2",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "tags": {
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.cluster_name": "ceph",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.crush_device_class": "",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.encrypted": "0",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osd_id": "2",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.type": "block",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:                "ceph.vdo": "0"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            },
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "type": "block",
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:            "vg_name": "ceph_vg2"
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:        }
Nov 22 00:28:30 np0005531754 kind_darwin[106895]:    ]
Nov 22 00:28:30 np0005531754 kind_darwin[106895]: }
Nov 22 00:28:30 np0005531754 systemd[1]: libpod-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope: Deactivated successfully.
Nov 22 00:28:30 np0005531754 conmon[106895]: conmon 0d22ebd2753127d44570 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope/container/memory.events
Nov 22 00:28:30 np0005531754 podman[106878]: 2025-11-22 05:28:30.579154135 +0000 UTC m=+0.938073331 container died 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048869133s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 174.195495605s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048822403s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.195495605s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048466682s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 174.195465088s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048411369s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.195465088s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:30 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:30 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:30 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380-merged.mount: Deactivated successfully.
Nov 22 00:28:30 np0005531754 podman[106878]: 2025-11-22 05:28:30.644227242 +0000 UTC m=+1.003146358 container remove 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:28:30 np0005531754 systemd[1]: libpod-conmon-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope: Deactivated successfully.
Nov 22 00:28:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 22 00:28:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 00:28:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 00:28:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 22 00:28:30 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 22 00:28:30 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:30 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:30 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:30 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:31 np0005531754 podman[107058]: 2025-11-22 05:28:31.42007865 +0000 UTC m=+0.062878988 container create cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:28:31 np0005531754 systemd[1]: Started libpod-conmon-cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b.scope.
Nov 22 00:28:31 np0005531754 podman[107058]: 2025-11-22 05:28:31.392321955 +0000 UTC m=+0.035122353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:28:31 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:28:31 np0005531754 podman[107058]: 2025-11-22 05:28:31.513411184 +0000 UTC m=+0.156211532 container init cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:28:31 np0005531754 podman[107058]: 2025-11-22 05:28:31.520573467 +0000 UTC m=+0.163373805 container start cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:28:31 np0005531754 podman[107058]: 2025-11-22 05:28:31.524563793 +0000 UTC m=+0.167364201 container attach cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:28:31 np0005531754 festive_feynman[107074]: 167 167
Nov 22 00:28:31 np0005531754 systemd[1]: libpod-cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b.scope: Deactivated successfully.
Nov 22 00:28:31 np0005531754 podman[107058]: 2025-11-22 05:28:31.527903723 +0000 UTC m=+0.170704061 container died cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:28:31 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6aaeb76da7b80f8ad64f3849e27b55f20b33c73ac1799d76f2b3a5ab1e7631a6-merged.mount: Deactivated successfully.
Nov 22 00:28:31 np0005531754 podman[107058]: 2025-11-22 05:28:31.579274761 +0000 UTC m=+0.222075089 container remove cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:28:31 np0005531754 systemd[1]: libpod-conmon-cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b.scope: Deactivated successfully.
Nov 22 00:28:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 22 00:28:31 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 00:28:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 485 B/s wr, 31 op/s; 173 B/s, 7 objects/s recovering
Nov 22 00:28:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 22 00:28:31 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 00:28:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 22 00:28:31 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 22 00:28:31 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 82 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=81/82 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:31 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 82 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=81/82 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:31 np0005531754 podman[107099]: 2025-11-22 05:28:31.821702057 +0000 UTC m=+0.067334808 container create 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:28:31 np0005531754 systemd[1]: Started libpod-conmon-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope.
Nov 22 00:28:31 np0005531754 podman[107099]: 2025-11-22 05:28:31.792457952 +0000 UTC m=+0.038090743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:28:31 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:28:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:28:31 np0005531754 podman[107099]: 2025-11-22 05:28:31.932538481 +0000 UTC m=+0.178171222 container init 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:28:31 np0005531754 podman[107099]: 2025-11-22 05:28:31.947309357 +0000 UTC m=+0.192942098 container start 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:28:31 np0005531754 podman[107099]: 2025-11-22 05:28:31.951499559 +0000 UTC m=+0.197132290 container attach 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 00:28:31 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 22 00:28:31 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 22 00:28:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 22 00:28:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 00:28:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 22 00:28:32 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 22 00:28:32 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=81/82 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.736738205s) [2] async=[2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 180.362548828s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:32 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=81/82 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.736606598s) [2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.362548828s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:32 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=81/82 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.732250214s) [2] async=[2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 180.358383179s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:32 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=81/82 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.732167244s) [2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.358383179s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:32 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:32 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:32 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:32 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 22 00:28:32 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 00:28:32 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 00:28:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 22 00:28:32 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 22 00:28:32 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 22 00:28:32 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 22 00:28:32 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]: {
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "osd_id": 1,
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "type": "bluestore"
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:    },
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "osd_id": 2,
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "type": "bluestore"
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:    },
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "osd_id": 0,
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:        "type": "bluestore"
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]:    }
Nov 22 00:28:33 np0005531754 eloquent_feistel[107115]: }
Nov 22 00:28:33 np0005531754 systemd[1]: libpod-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope: Deactivated successfully.
Nov 22 00:28:33 np0005531754 systemd[1]: libpod-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope: Consumed 1.101s CPU time.
Nov 22 00:28:33 np0005531754 podman[107099]: 2025-11-22 05:28:33.043694935 +0000 UTC m=+1.289327676 container died 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 22 00:28:33 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c-merged.mount: Deactivated successfully.
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 22 00:28:33 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 84 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=83/84 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:33 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 84 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=83/84 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:33 np0005531754 podman[107099]: 2025-11-22 05:28:33.11877826 +0000 UTC m=+1.364410971 container remove 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:28:33 np0005531754 systemd[1]: libpod-conmon-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope: Deactivated successfully.
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:28:33 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 913c5b84-5e41-4ac0-9590-6038787f0d3d does not exist
Nov 22 00:28:33 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5116423c-4e8f-4761-94a7-57e9ab8367ce does not exist
Nov 22 00:28:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:28:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 00:28:33 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 22 00:28:33 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 22 00:28:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 22 00:28:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 00:28:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 22 00:28:34 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 22 00:28:34 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 22 00:28:34 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 22 00:28:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 00:28:35 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 22 00:28:35 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 22 00:28:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 22 00:28:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 00:28:35 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 22 00:28:35 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 22 00:28:36 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 22 00:28:36 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 00:28:36 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 22 00:28:36 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 22 00:28:36 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 00:28:36 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.246051788s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 182.195312500s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:36 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.245971680s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.195312500s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:36 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.246371269s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 182.195816040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:36 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.246341705s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.195816040s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:36 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 86 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:36 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 86 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 22 00:28:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 22 00:28:37 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 22 00:28:37 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:37 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:37 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:37 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:37 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:37 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:37 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:37 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:37 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 00:28:37 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 22 00:28:37 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 22 00:28:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 47 B/s, 3 objects/s recovering
Nov 22 00:28:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 22 00:28:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 22 00:28:38 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 22 00:28:38 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 22 00:28:38 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 22 00:28:38 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 88 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:38 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 88 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:38 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Nov 22 00:28:38 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Nov 22 00:28:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 22 00:28:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 22 00:28:39 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 22 00:28:39 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:39 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:39 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:39 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:39 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.161184311s) [2] async=[2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 186.946456909s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:39 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.161073685s) [2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.946456909s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:39 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.156772614s) [2] async=[2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 186.942230225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:39 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.156560898s) [2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.942230225s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:39 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Nov 22 00:28:39 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Nov 22 00:28:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 22 00:28:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 22 00:28:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 22 00:28:40 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 22 00:28:40 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 90 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:40 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 90 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:40 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 22 00:28:40 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 22 00:28:41 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 22 00:28:41 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 22 00:28:41 np0005531754 systemd-logind[798]: New session 34 of user zuul.
Nov 22 00:28:41 np0005531754 systemd[1]: Started Session 34 of User zuul.
Nov 22 00:28:41 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Nov 22 00:28:41 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Nov 22 00:28:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 220 B/s wr, 20 op/s; 23 B/s, 2 objects/s recovering
Nov 22 00:28:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 22 00:28:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 00:28:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:42 np0005531754 python3.9[107366]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 00:28:42 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 22 00:28:42 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 22 00:28:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 22 00:28:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 00:28:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 22 00:28:42 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 22 00:28:42 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 00:28:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 00:28:43 np0005531754 python3.9[107540]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:28:43
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', '.mgr']
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 181 B/s wr, 16 op/s; 19 B/s, 2 objects/s recovering
Nov 22 00:28:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 22 00:28:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:28:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:28:43 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 22 00:28:43 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 22 00:28:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 22 00:28:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 00:28:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 00:28:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 22 00:28:44 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 22 00:28:44 np0005531754 python3.9[107696]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:28:44 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Nov 22 00:28:44 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Nov 22 00:28:45 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 22 00:28:45 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 22 00:28:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 00:28:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 170 B/s wr, 15 op/s; 18 B/s, 1 objects/s recovering
Nov 22 00:28:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 22 00:28:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 00:28:45 np0005531754 python3.9[107849]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:28:45 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 22 00:28:45 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 22 00:28:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 22 00:28:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 22 00:28:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 22 00:28:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 00:28:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 22 00:28:46 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 22 00:28:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 00:28:46 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 22 00:28:46 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 22 00:28:46 np0005531754 python3.9[108003]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:28:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 00:28:47 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 22 00:28:47 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 22 00:28:47 np0005531754 python3.9[108155]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:28:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 22 00:28:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 00:28:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 22 00:28:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 00:28:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 22 00:28:48 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 22 00:28:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 00:28:48 np0005531754 python3.9[108305]: ansible-ansible.builtin.service_facts Invoked
Nov 22 00:28:48 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 22 00:28:48 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 22 00:28:48 np0005531754 network[108322]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 00:28:48 np0005531754 network[108323]: 'network-scripts' will be removed from distribution in near future.
Nov 22 00:28:48 np0005531754 network[108324]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 00:28:48 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 22 00:28:48 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 22 00:28:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 00:28:49 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 22 00:28:49 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 22 00:28:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 22 00:28:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 00:28:50 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 22 00:28:50 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 22 00:28:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 22 00:28:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 00:28:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 22 00:28:50 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 22 00:28:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 00:28:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 00:28:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 22 00:28:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 00:28:51 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 22 00:28:51 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 22 00:28:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 22 00:28:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 00:28:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 00:28:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 22 00:28:52 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:28:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:28:53 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 22 00:28:53 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 22 00:28:53 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 00:28:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 22 00:28:53 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 00:28:54 np0005531754 python3.9[108584]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:28:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 22 00:28:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 00:28:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 00:28:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 22 00:28:54 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 22 00:28:54 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 22 00:28:54 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 22 00:28:55 np0005531754 python3.9[108734]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:28:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 00:28:55 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 22 00:28:55 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 22 00:28:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 22 00:28:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 00:28:56 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 22 00:28:56 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 22 00:28:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 22 00:28:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 00:28:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 22 00:28:56 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 22 00:28:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 00:28:56 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 22 00:28:56 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 22 00:28:56 np0005531754 python3.9[108888]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:28:56 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 22 00:28:56 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 22 00:28:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 97 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=12.134371758s) [2] r=-1 lpr=97 pi=[66,97)/1 crt=55'578 mlcod 0'0 active pruub 207.122039795s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:56 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 98 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=12.134313583s) [2] r=-1 lpr=97 pi=[66,97)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 207.122039795s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:56 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 98 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=97) [2] r=0 lpr=98 pi=[66,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:28:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 22 00:28:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 22 00:28:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 22 00:28:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 99 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:57 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 99 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 99 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=0 lpr=99 pi=[66,99)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:57 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 99 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=0 lpr=99 pi=[66,99)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:57 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 22 00:28:57 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 22 00:28:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 00:28:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 22 00:28:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 00:28:57 np0005531754 python3.9[109046]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:28:57 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 22 00:28:57 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 22 00:28:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 22 00:28:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 00:28:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 22 00:28:58 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 22 00:28:58 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 100 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=100 pruub=11.041053772s) [1] r=-1 lpr=100 pi=[66,100)/1 crt=55'578 mlcod 0'0 active pruub 207.121765137s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:58 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 100 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=100 pruub=11.040739059s) [1] r=-1 lpr=100 pi=[66,100)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 207.121765137s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:58 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=100) [1] r=0 lpr=100 pi=[66,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:58 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 100 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=99/100 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[66,99)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:28:58 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 22 00:28:58 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 22 00:28:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 00:28:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 00:28:58 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Nov 22 00:28:58 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Nov 22 00:28:58 np0005531754 python3.9[109130]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:28:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 22 00:28:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 22 00:28:59 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 22 00:28:59 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:59 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:59 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=99/100 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101 pruub=15.007973671s) [2] async=[2] r=-1 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 55'578 active pruub 212.097778320s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:59 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=99/100 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101 pruub=15.007735252s) [2] r=-1 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 212.097778320s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:59 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[66,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:59 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[66,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:28:59 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101) [2] r=0 lpr=101 pi=[66,101)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:28:59 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101) [2] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:28:59 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 22 00:28:59 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 22 00:28:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:28:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 22 00:28:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 00:29:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 22 00:29:00 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 00:29:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 22 00:29:00 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 22 00:29:00 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 102 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=9.984745026s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'578 mlcod 0'0 active pruub 196.532760620s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:00 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 102 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=9.984657288s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 196.532760620s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:00 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:00 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 102 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101) [2] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:00 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 00:29:00 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 00:29:00 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 22 00:29:00 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 102 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] async=[1] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:00 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 22 00:29:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 22 00:29:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 22 00:29:01 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 22 00:29:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103 pruub=15.462775230s) [1] async=[1] r=-1 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 55'578 active pruub 214.564346313s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103 pruub=15.462686539s) [1] r=-1 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 214.564346313s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:01 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103) [1] r=0 lpr=103 pi=[66,103)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:01 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103) [1] r=0 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 103 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:01 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 103 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 00:29:01 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 22 00:29:01 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 22 00:29:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 22 00:29:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 22 00:29:02 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 22 00:29:02 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 104 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103) [1] r=0 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:02 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 22 00:29:02 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 22 00:29:02 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 104 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:02 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 22 00:29:02 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 22 00:29:02 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 22 00:29:02 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 22 00:29:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 22 00:29:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 22 00:29:03 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 22 00:29:03 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.042300224s) [0] async=[0] r=-1 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 55'578 active pruub 204.632476807s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:03 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.042198181s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 204.632476807s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:03 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:03 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:03 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 22 00:29:03 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 22 00:29:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 00:29:04 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 22 00:29:04 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 22 00:29:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 22 00:29:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 22 00:29:04 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 22 00:29:04 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 106 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=105/106 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 3 objects/s recovering
Nov 22 00:29:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 22 00:29:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 00:29:05 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 22 00:29:05 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 22 00:29:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 22 00:29:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 00:29:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 00:29:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 22 00:29:06 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 22 00:29:06 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 22 00:29:06 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 22 00:29:06 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 22 00:29:06 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 22 00:29:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 00:29:07 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Nov 22 00:29:07 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Nov 22 00:29:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 1 objects/s recovering
Nov 22 00:29:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 22 00:29:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 00:29:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 22 00:29:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 00:29:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 22 00:29:08 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 22 00:29:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 00:29:09 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 00:29:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 22 00:29:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 22 00:29:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 00:29:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 22 00:29:10 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 00:29:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 00:29:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 22 00:29:10 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 22 00:29:10 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 109 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=109 pruub=15.884453773s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'578 mlcod 0'0 active pruub 224.134216309s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:10 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 109 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=109 pruub=15.884372711s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 224.134216309s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:10 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 22 00:29:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 00:29:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 22 00:29:11 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 22 00:29:11 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 110 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=0 lpr=110 pi=[67,110)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:11 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 110 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=0 lpr=110 pi=[67,110)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:11 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:11 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 22 00:29:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 00:29:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 22 00:29:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 00:29:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 00:29:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 22 00:29:12 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 22 00:29:12 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 111 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=110/111 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[67,110)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 22 00:29:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 00:29:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 22 00:29:13 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 22 00:29:13 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112) [2] r=0 lpr=112 pi=[67,112)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:13 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112) [2] r=0 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:13 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=110/111 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112 pruub=15.364536285s) [2] async=[2] r=-1 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 55'578 active pruub 226.624130249s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:13 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=110/111 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112 pruub=15.364456177s) [2] r=-1 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 226.624130249s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 22 00:29:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 00:29:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:29:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:29:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:29:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:29:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:29:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:29:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 22 00:29:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 00:29:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 22 00:29:14 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 22 00:29:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 00:29:14 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 113 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=112/113 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112) [2] r=0 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 00:29:15 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 22 00:29:15 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 22 00:29:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 22 00:29:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 00:29:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 22 00:29:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 00:29:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 22 00:29:16 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 22 00:29:16 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 114 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=11.955540657s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=55'578 mlcod 0'0 active pruub 214.696533203s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:16 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 114 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=11.954920769s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 214.696533203s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 00:29:16 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=114) [0] r=0 lpr=114 pi=[89,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:16 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 22 00:29:16 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 22 00:29:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 22 00:29:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 22 00:29:17 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 22 00:29:17 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:17 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:17 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 115 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:17 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 115 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 00:29:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 2 objects/s recovering
Nov 22 00:29:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 22 00:29:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 00:29:17 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 22 00:29:17 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 22 00:29:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 22 00:29:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 00:29:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 22 00:29:18 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 22 00:29:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 00:29:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 00:29:18 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 22 00:29:18 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 22 00:29:18 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 116 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=115/116 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[89,115)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 22 00:29:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 22 00:29:19 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 22 00:29:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=115/116 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.647471428s) [0] async=[0] r=-1 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 55'578 active pruub 221.433456421s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:19 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=115/116 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.646136284s) [0] r=-1 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 221.433456421s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:19 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:19 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:19 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 22 00:29:19 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 22 00:29:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 22 00:29:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 22 00:29:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 00:29:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 22 00:29:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 00:29:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 22 00:29:20 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 00:29:20 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 22 00:29:20 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 118 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=117/118 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:20 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 118 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=118 pruub=13.508271217s) [0] r=-1 lpr=118 pi=[75,118)/1 crt=55'578 mlcod 0'0 active pruub 220.533447266s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:20 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 118 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=118 pruub=13.508199692s) [0] r=-1 lpr=118 pi=[75,118)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 220.533447266s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:20 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 118 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=118) [0] r=0 lpr=118 pi=[75,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 22 00:29:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 00:29:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 22 00:29:21 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 22 00:29:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[75,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:21 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[75,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 119 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=0 lpr=119 pi=[75,119)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:21 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 119 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=0 lpr=119 pi=[75,119)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:21 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 22 00:29:21 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 22 00:29:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 00:29:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 22 00:29:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 22 00:29:22 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 22 00:29:22 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 120 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=119/120 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[75,119)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 22 00:29:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 22 00:29:23 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 22 00:29:23 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=119/120 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121 pruub=15.105906487s) [0] async=[0] r=-1 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 55'578 active pruub 224.958618164s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:23 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=119/120 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121 pruub=15.105776787s) [0] r=-1 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 224.958618164s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:23 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121) [0] r=0 lpr=121 pi=[75,121)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:23 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121) [0] r=0 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:23 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 22 00:29:23 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 22 00:29:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 00:29:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 22 00:29:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 22 00:29:24 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 22 00:29:24 np0005531754 ceph-osd[89779]: osd.0 pg_epoch: 122 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=121/122 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121) [0] r=0 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:25 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 22 00:29:25 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 22 00:29:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 22 00:29:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 00:29:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:29:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 22 00:29:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 00:29:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:29:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 22 00:29:26 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 22 00:29:26 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 123 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=123 pruub=10.947411537s) [1] r=-1 lpr=123 pi=[78,123)/1 crt=55'578 mlcod 0'0 active pruub 223.843734741s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:26 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 123 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=123 pruub=10.947351456s) [1] r=-1 lpr=123 pi=[78,123)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 223.843734741s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:26 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=123) [1] r=0 lpr=123 pi=[78,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 22 00:29:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 22 00:29:27 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 22 00:29:27 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 124 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=0 lpr=124 pi=[78,124)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:27 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 124 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=0 lpr=124 pi=[78,124)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:27 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 22 00:29:27 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[78,124)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:27 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[78,124)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:27 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 22 00:29:27 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 00:29:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 22 00:29:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 22 00:29:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 22 00:29:28 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 22 00:29:28 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 125 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=124/125 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] async=[1] r=0 lpr=124 pi=[78,124)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 22 00:29:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 22 00:29:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 22 00:29:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 22 00:29:29 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 22 00:29:29 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=124/125 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126 pruub=15.003382683s) [1] async=[1] r=-1 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 55'578 active pruub 230.555160522s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:29 np0005531754 ceph-osd[91881]: osd.2 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=124/125 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126 pruub=15.003293991s) [1] r=-1 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 230.555160522s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 00:29:29 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126) [1] r=0 lpr=126 pi=[78,126)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 00:29:29 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126) [1] r=0 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 00:29:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 22 00:29:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 22 00:29:30 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 22 00:29:30 np0005531754 ceph-osd[90784]: osd.1 pg_epoch: 127 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=126/127 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126) [1] r=0 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 00:29:30 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 22 00:29:30 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 22 00:29:31 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Nov 22 00:29:31 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Nov 22 00:29:31 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 22 00:29:31 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 22 00:29:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 22 00:29:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 22 00:29:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 22 00:29:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:29:34 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 0a6a9f28-9990-4d76-b588-b335d3e966d9 does not exist
Nov 22 00:29:34 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ec831de7-02dc-4eb6-896c-364d0af5e3f1 does not exist
Nov 22 00:29:34 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f6b83453-3c9e-4a0a-9995-ec4b66ed4069 does not exist
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:29:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:29:34 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 22 00:29:34 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 22 00:29:34 np0005531754 podman[109544]: 2025-11-22 05:29:34.974676047 +0000 UTC m=+0.074268627 container create 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:29:35 np0005531754 systemd[1]: Started libpod-conmon-7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d.scope.
Nov 22 00:29:35 np0005531754 podman[109544]: 2025-11-22 05:29:34.94608034 +0000 UTC m=+0.045672910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:29:35 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:29:35 np0005531754 podman[109544]: 2025-11-22 05:29:35.075919557 +0000 UTC m=+0.175512187 container init 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:29:35 np0005531754 podman[109544]: 2025-11-22 05:29:35.0847654 +0000 UTC m=+0.184357990 container start 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:29:35 np0005531754 magical_rhodes[109561]: 167 167
Nov 22 00:29:35 np0005531754 systemd[1]: libpod-7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d.scope: Deactivated successfully.
Nov 22 00:29:35 np0005531754 podman[109544]: 2025-11-22 05:29:35.089959228 +0000 UTC m=+0.189551818 container attach 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:29:35 np0005531754 podman[109544]: 2025-11-22 05:29:35.090869712 +0000 UTC m=+0.190462292 container died 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:29:35 np0005531754 systemd[1]: var-lib-containers-storage-overlay-24a888ee51b65521e24e48a1f493dbed213c7ab6a85ae92c6e2946fe9aaac14d-merged.mount: Deactivated successfully.
Nov 22 00:29:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:29:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:29:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:29:35 np0005531754 podman[109544]: 2025-11-22 05:29:35.15728551 +0000 UTC m=+0.256878070 container remove 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:29:35 np0005531754 systemd[1]: libpod-conmon-7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d.scope: Deactivated successfully.
Nov 22 00:29:35 np0005531754 podman[109587]: 2025-11-22 05:29:35.344514144 +0000 UTC m=+0.026852991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:29:35 np0005531754 podman[109587]: 2025-11-22 05:29:35.441723447 +0000 UTC m=+0.124062294 container create 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:29:35 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 22 00:29:35 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 22 00:29:35 np0005531754 systemd[1]: Started libpod-conmon-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope.
Nov 22 00:29:35 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:29:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:35 np0005531754 podman[109587]: 2025-11-22 05:29:35.638754872 +0000 UTC m=+0.321093829 container init 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:29:35 np0005531754 podman[109587]: 2025-11-22 05:29:35.649728432 +0000 UTC m=+0.332067299 container start 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:29:35 np0005531754 podman[109587]: 2025-11-22 05:29:35.654720724 +0000 UTC m=+0.337059671 container attach 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:29:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 22 00:29:36 np0005531754 python3.9[109773]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:29:36 np0005531754 stupefied_chatelet[109605]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:29:36 np0005531754 stupefied_chatelet[109605]: --> relative data size: 1.0
Nov 22 00:29:36 np0005531754 stupefied_chatelet[109605]: --> All data devices are unavailable
Nov 22 00:29:36 np0005531754 systemd[1]: libpod-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope: Deactivated successfully.
Nov 22 00:29:36 np0005531754 systemd[1]: libpod-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope: Consumed 1.056s CPU time.
Nov 22 00:29:36 np0005531754 podman[109587]: 2025-11-22 05:29:36.781803542 +0000 UTC m=+1.464142439 container died 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:29:36 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890-merged.mount: Deactivated successfully.
Nov 22 00:29:36 np0005531754 podman[109587]: 2025-11-22 05:29:36.887326195 +0000 UTC m=+1.569665022 container remove 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:29:36 np0005531754 systemd[1]: libpod-conmon-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope: Deactivated successfully.
Nov 22 00:29:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:37 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 22 00:29:37 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 22 00:29:37 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 22 00:29:37 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 22 00:29:37 np0005531754 podman[110101]: 2025-11-22 05:29:37.69575439 +0000 UTC m=+0.047613381 container create 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:29:37 np0005531754 systemd[1]: Started libpod-conmon-17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82.scope.
Nov 22 00:29:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 22 00:29:37 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:29:37 np0005531754 podman[110101]: 2025-11-22 05:29:37.676185082 +0000 UTC m=+0.028044073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:29:37 np0005531754 podman[110101]: 2025-11-22 05:29:37.79175248 +0000 UTC m=+0.143611531 container init 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:29:37 np0005531754 podman[110101]: 2025-11-22 05:29:37.803532683 +0000 UTC m=+0.155391684 container start 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:29:37 np0005531754 podman[110101]: 2025-11-22 05:29:37.807464566 +0000 UTC m=+0.159323567 container attach 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:29:37 np0005531754 friendly_spence[110140]: 167 167
Nov 22 00:29:37 np0005531754 podman[110101]: 2025-11-22 05:29:37.809652814 +0000 UTC m=+0.161511775 container died 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:29:37 np0005531754 systemd[1]: libpod-17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82.scope: Deactivated successfully.
Nov 22 00:29:37 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e38034dfe3b8c006b790edeb155770b81b1093b83420de3ddc0be47e32c593ee-merged.mount: Deactivated successfully.
Nov 22 00:29:37 np0005531754 podman[110101]: 2025-11-22 05:29:37.851466481 +0000 UTC m=+0.203325462 container remove 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:29:37 np0005531754 systemd[1]: libpod-conmon-17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82.scope: Deactivated successfully.
Nov 22 00:29:38 np0005531754 podman[110193]: 2025-11-22 05:29:38.00487101 +0000 UTC m=+0.050072165 container create 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:29:38 np0005531754 systemd[1]: Started libpod-conmon-6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71.scope.
Nov 22 00:29:38 np0005531754 podman[110193]: 2025-11-22 05:29:37.983032613 +0000 UTC m=+0.028233808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:29:38 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:29:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:38 np0005531754 podman[110193]: 2025-11-22 05:29:38.101027555 +0000 UTC m=+0.146228760 container init 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:29:38 np0005531754 podman[110193]: 2025-11-22 05:29:38.116390002 +0000 UTC m=+0.161591157 container start 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:29:38 np0005531754 podman[110193]: 2025-11-22 05:29:38.120019579 +0000 UTC m=+0.165220814 container attach 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:29:38 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 22 00:29:38 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 22 00:29:38 np0005531754 python3.9[110289]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]: {
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:    "0": [
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:        {
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "devices": [
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "/dev/loop3"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            ],
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_name": "ceph_lv0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_size": "21470642176",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "name": "ceph_lv0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "tags": {
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cluster_name": "ceph",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.crush_device_class": "",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.encrypted": "0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osd_id": "0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.type": "block",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.vdo": "0"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            },
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "type": "block",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "vg_name": "ceph_vg0"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:        }
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:    ],
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:    "1": [
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:        {
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "devices": [
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "/dev/loop4"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            ],
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_name": "ceph_lv1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_size": "21470642176",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "name": "ceph_lv1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "tags": {
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cluster_name": "ceph",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.crush_device_class": "",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.encrypted": "0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osd_id": "1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.type": "block",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.vdo": "0"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            },
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "type": "block",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "vg_name": "ceph_vg1"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:        }
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:    ],
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:    "2": [
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:        {
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "devices": [
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "/dev/loop5"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            ],
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_name": "ceph_lv2",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_size": "21470642176",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "name": "ceph_lv2",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "tags": {
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.cluster_name": "ceph",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.crush_device_class": "",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.encrypted": "0",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osd_id": "2",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.type": "block",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:                "ceph.vdo": "0"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            },
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "type": "block",
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:            "vg_name": "ceph_vg2"
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:        }
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]:    ]
Nov 22 00:29:38 np0005531754 youthful_dijkstra[110209]: }
Nov 22 00:29:38 np0005531754 systemd[1]: libpod-6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71.scope: Deactivated successfully.
Nov 22 00:29:38 np0005531754 podman[110193]: 2025-11-22 05:29:38.906977095 +0000 UTC m=+0.952178270 container died 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:29:38 np0005531754 systemd[1]: var-lib-containers-storage-overlay-09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68-merged.mount: Deactivated successfully.
Nov 22 00:29:38 np0005531754 podman[110193]: 2025-11-22 05:29:38.966077239 +0000 UTC m=+1.011278374 container remove 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:29:38 np0005531754 systemd[1]: libpod-conmon-6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71.scope: Deactivated successfully.
Nov 22 00:29:39 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 22 00:29:39 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 22 00:29:39 np0005531754 python3.9[110577]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 00:29:39 np0005531754 podman[110597]: 2025-11-22 05:29:39.688916559 +0000 UTC m=+0.066700836 container create 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:29:39 np0005531754 systemd[1]: Started libpod-conmon-3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386.scope.
Nov 22 00:29:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 22 00:29:39 np0005531754 podman[110597]: 2025-11-22 05:29:39.660376164 +0000 UTC m=+0.038160531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:29:39 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:29:39 np0005531754 podman[110597]: 2025-11-22 05:29:39.78796804 +0000 UTC m=+0.165752397 container init 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:29:39 np0005531754 podman[110597]: 2025-11-22 05:29:39.795340345 +0000 UTC m=+0.173124662 container start 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 00:29:39 np0005531754 podman[110597]: 2025-11-22 05:29:39.799713641 +0000 UTC m=+0.177498008 container attach 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:29:39 np0005531754 dreamy_curie[110613]: 167 167
Nov 22 00:29:39 np0005531754 systemd[1]: libpod-3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386.scope: Deactivated successfully.
Nov 22 00:29:39 np0005531754 podman[110597]: 2025-11-22 05:29:39.801823067 +0000 UTC m=+0.179607374 container died 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:29:39 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6f31e4cb8efbabb6ebc1ccc487eb0cc13cea06e1e777ef9292dd787303589a9b-merged.mount: Deactivated successfully.
Nov 22 00:29:39 np0005531754 podman[110597]: 2025-11-22 05:29:39.853962547 +0000 UTC m=+0.231746854 container remove 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:29:39 np0005531754 systemd[1]: libpod-conmon-3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386.scope: Deactivated successfully.
Nov 22 00:29:40 np0005531754 podman[110691]: 2025-11-22 05:29:40.055756647 +0000 UTC m=+0.041336854 container create 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:29:40 np0005531754 systemd[1]: Started libpod-conmon-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope.
Nov 22 00:29:40 np0005531754 podman[110691]: 2025-11-22 05:29:40.040970736 +0000 UTC m=+0.026550963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:29:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:29:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:29:40 np0005531754 podman[110691]: 2025-11-22 05:29:40.168895352 +0000 UTC m=+0.154475659 container init 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:29:40 np0005531754 podman[110691]: 2025-11-22 05:29:40.183498838 +0000 UTC m=+0.169079085 container start 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:29:40 np0005531754 podman[110691]: 2025-11-22 05:29:40.189805745 +0000 UTC m=+0.175385972 container attach 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:29:40 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Nov 22 00:29:40 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Nov 22 00:29:40 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 22 00:29:40 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 22 00:29:40 np0005531754 python3.9[110812]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]: {
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "osd_id": 1,
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "type": "bluestore"
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:    },
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "osd_id": 2,
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "type": "bluestore"
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:    },
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "osd_id": 0,
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:        "type": "bluestore"
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]:    }
Nov 22 00:29:41 np0005531754 elegant_blackwell[110755]: }
Nov 22 00:29:41 np0005531754 systemd[1]: libpod-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope: Deactivated successfully.
Nov 22 00:29:41 np0005531754 systemd[1]: libpod-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope: Consumed 1.108s CPU time.
Nov 22 00:29:41 np0005531754 podman[110691]: 2025-11-22 05:29:41.302068091 +0000 UTC m=+1.287648308 container died 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 00:29:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10-merged.mount: Deactivated successfully.
Nov 22 00:29:41 np0005531754 podman[110691]: 2025-11-22 05:29:41.382213983 +0000 UTC m=+1.367794230 container remove 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:29:41 np0005531754 systemd[1]: libpod-conmon-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope: Deactivated successfully.
Nov 22 00:29:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:29:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:29:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:29:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:29:41 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d20db80c-3c03-434c-b581-93671437425d does not exist
Nov 22 00:29:41 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev b6fa1616-e14d-4d75-b719-97f879348504 does not exist
Nov 22 00:29:41 np0005531754 python3.9[110992]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 00:29:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 22 00:29:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:42 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 22 00:29:42 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 22 00:29:42 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:29:42 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:29:42 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Nov 22 00:29:42 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Nov 22 00:29:42 np0005531754 python3.9[111208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:29:43 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 22 00:29:43 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 22 00:29:43 np0005531754 python3.9[111361]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:29:43
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:29:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:29:45 np0005531754 python3.9[111439]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:29:45 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 22 00:29:45 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 22 00:29:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 22 00:29:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 22 00:29:46 np0005531754 python3.9[111591]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:29:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 22 00:29:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 22 00:29:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:47 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 22 00:29:47 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 22 00:29:47 np0005531754 python3.9[111745]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 00:29:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:48 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 22 00:29:48 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 22 00:29:48 np0005531754 python3.9[111898]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 00:29:49 np0005531754 python3.9[112051]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 00:29:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:50 np0005531754 python3.9[112203]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 00:29:50 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Nov 22 00:29:50 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 22 00:29:50 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Nov 22 00:29:50 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 22 00:29:51 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 22 00:29:51 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 22 00:29:51 np0005531754 python3.9[112355]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:29:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 22 00:29:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 22 00:29:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:52 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 22 00:29:52 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:29:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:29:53 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 22 00:29:53 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 22 00:29:53 np0005531754 python3.9[112508]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:29:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:54 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 22 00:29:54 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 22 00:29:54 np0005531754 python3.9[112660]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:29:55 np0005531754 python3.9[112738]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:29:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:55 np0005531754 python3.9[112890]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:29:56 np0005531754 python3.9[112968]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:29:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:29:57 np0005531754 systemd[77455]: Created slice User Background Tasks Slice.
Nov 22 00:29:57 np0005531754 systemd[77455]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 00:29:57 np0005531754 systemd[77455]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 00:29:57 np0005531754 python3.9[113121]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:29:57 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 22 00:29:57 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 22 00:29:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:29:58 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 22 00:29:58 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 22 00:29:58 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 22 00:29:58 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 22 00:29:58 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 22 00:29:58 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 22 00:29:59 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 22 00:29:59 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 22 00:29:59 np0005531754 python3.9[113272]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:29:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:00 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 22 00:30:00 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 22 00:30:00 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 22 00:30:00 np0005531754 python3.9[113424]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 00:30:00 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 22 00:30:00 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 22 00:30:00 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 22 00:30:01 np0005531754 python3.9[113574]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:30:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:02 np0005531754 python3.9[113726]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:30:02 np0005531754 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 00:30:02 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 22 00:30:02 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 22 00:30:02 np0005531754 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 00:30:02 np0005531754 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 00:30:02 np0005531754 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 00:30:02 np0005531754 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 00:30:03 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 22 00:30:03 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 22 00:30:03 np0005531754 python3.9[113888]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 00:30:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:04 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 22 00:30:04 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 22 00:30:05 np0005531754 python3.9[114040]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:30:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:06 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Nov 22 00:30:06 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Nov 22 00:30:06 np0005531754 python3.9[114194]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:30:07 np0005531754 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Nov 22 00:30:07 np0005531754 systemd[1]: session-34.scope: Deactivated successfully.
Nov 22 00:30:07 np0005531754 systemd[1]: session-34.scope: Consumed 1min 5.986s CPU time.
Nov 22 00:30:07 np0005531754 systemd-logind[798]: Removed session 34.
Nov 22 00:30:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:08 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 22 00:30:08 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 22 00:30:08 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 22 00:30:08 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 22 00:30:09 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 22 00:30:09 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 22 00:30:09 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 22 00:30:09 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 22 00:30:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:10 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 22 00:30:10 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 22 00:30:10 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 22 00:30:10 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 22 00:30:11 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 22 00:30:11 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 22 00:30:11 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Nov 22 00:30:11 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Nov 22 00:30:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:11 np0005531754 systemd-logind[798]: New session 35 of user zuul.
Nov 22 00:30:11 np0005531754 systemd[1]: Started Session 35 of User zuul.
Nov 22 00:30:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:12 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 22 00:30:12 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 22 00:30:13 np0005531754 python3.9[114374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:30:13 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 22 00:30:13 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 22 00:30:13 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 22 00:30:13 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 22 00:30:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:30:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:30:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:30:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:30:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:30:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:30:14 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 22 00:30:14 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 22 00:30:14 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 22 00:30:14 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 22 00:30:14 np0005531754 python3.9[114530]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 00:30:15 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 22 00:30:15 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 22 00:30:15 np0005531754 python3.9[114683]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:30:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:16 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 22 00:30:16 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 22 00:30:16 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 22 00:30:16 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 22 00:30:16 np0005531754 python3.9[114767]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 00:30:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:17 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 22 00:30:17 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 22 00:30:17 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.4 deep-scrub starts
Nov 22 00:30:17 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.4 deep-scrub ok
Nov 22 00:30:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:18 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 22 00:30:18 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 22 00:30:18 np0005531754 python3.9[114920]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:30:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:20 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 22 00:30:20 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 22 00:30:20 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 22 00:30:20 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 22 00:30:21 np0005531754 python3.9[115073]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:30:21 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 22 00:30:21 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 22 00:30:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:21 np0005531754 python3.9[115226]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:30:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:22 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 22 00:30:22 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 22 00:30:23 np0005531754 python3.9[115378]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 00:30:23 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 22 00:30:23 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 22 00:30:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:24 np0005531754 python3.9[115528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:30:25 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 22 00:30:25 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 22 00:30:25 np0005531754 python3.9[115686]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:30:25 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 22 00:30:25 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 22 00:30:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:26 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 22 00:30:26 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 22 00:30:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:27 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Nov 22 00:30:27 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Nov 22 00:30:27 np0005531754 python3.9[115839]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:30:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 22 00:30:28 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 22 00:30:29 np0005531754 python3.9[116126]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 00:30:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:30 np0005531754 python3.9[116276]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:30:30 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 22 00:30:30 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 22 00:30:31 np0005531754 python3.9[116430]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:30:31 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Nov 22 00:30:31 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Nov 22 00:30:31 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 22 00:30:31 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 22 00:30:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 22 00:30:32 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 22 00:30:32 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 22 00:30:32 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 22 00:30:33 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 22 00:30:33 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 22 00:30:33 np0005531754 python3.9[116583]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:30:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:35 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 22 00:30:35 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 22 00:30:35 np0005531754 python3.9[116736]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:30:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:36 np0005531754 python3.9[116890]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 22 00:30:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:37 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 22 00:30:37 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 22 00:30:37 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 22 00:30:37 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 22 00:30:37 np0005531754 systemd[1]: session-35.scope: Deactivated successfully.
Nov 22 00:30:37 np0005531754 systemd[1]: session-35.scope: Consumed 19.762s CPU time.
Nov 22 00:30:37 np0005531754 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Nov 22 00:30:37 np0005531754 systemd-logind[798]: Removed session 35.
Nov 22 00:30:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:39 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 22 00:30:39 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 22 00:30:39 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 22 00:30:39 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 22 00:30:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:40 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Nov 22 00:30:40 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Nov 22 00:30:41 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 22 00:30:41 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 22 00:30:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:30:42 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 3f1756ae-39d6-435f-b817-d438424d80c8 does not exist
Nov 22 00:30:42 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c83876d6-0b24-4d22-ab67-f6787c740985 does not exist
Nov 22 00:30:42 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 36ab5fbc-fb9f-4e99-9a7a-28807d8cbd8f does not exist
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:30:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:30:43 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 22 00:30:43 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 22 00:30:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:30:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:30:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:30:43 np0005531754 systemd-logind[798]: New session 36 of user zuul.
Nov 22 00:30:43 np0005531754 podman[117187]: 2025-11-22 05:30:43.301462367 +0000 UTC m=+0.050231752 container create 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:30:43 np0005531754 systemd[1]: Started Session 36 of User zuul.
Nov 22 00:30:43 np0005531754 systemd[1]: Started libpod-conmon-942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab.scope.
Nov 22 00:30:43 np0005531754 podman[117187]: 2025-11-22 05:30:43.275869039 +0000 UTC m=+0.024638454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:30:43 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:30:43 np0005531754 podman[117187]: 2025-11-22 05:30:43.397575736 +0000 UTC m=+0.146345201 container init 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:30:43 np0005531754 podman[117187]: 2025-11-22 05:30:43.410555759 +0000 UTC m=+0.159325134 container start 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:30:43 np0005531754 podman[117187]: 2025-11-22 05:30:43.414698269 +0000 UTC m=+0.163467724 container attach 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:30:43 np0005531754 bold_dewdney[117206]: 167 167
Nov 22 00:30:43 np0005531754 systemd[1]: libpod-942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab.scope: Deactivated successfully.
Nov 22 00:30:43 np0005531754 podman[117187]: 2025-11-22 05:30:43.421379257 +0000 UTC m=+0.170148702 container died 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:30:43 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 22 00:30:43 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 22 00:30:43 np0005531754 systemd[1]: var-lib-containers-storage-overlay-66e1c30bd2d42116d305870d8f433f99d3b719c36f240894bb4ada205abc9eb8-merged.mount: Deactivated successfully.
Nov 22 00:30:43 np0005531754 podman[117187]: 2025-11-22 05:30:43.478757568 +0000 UTC m=+0.227526983 container remove 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:30:43 np0005531754 systemd[1]: libpod-conmon-942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab.scope: Deactivated successfully.
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:30:43
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', 'backups', '.mgr']
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:30:43 np0005531754 podman[117281]: 2025-11-22 05:30:43.695912454 +0000 UTC m=+0.056691083 container create 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:30:43 np0005531754 podman[117281]: 2025-11-22 05:30:43.667929272 +0000 UTC m=+0.028708011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:30:43 np0005531754 systemd[1]: Started libpod-conmon-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope.
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:30:43 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:30:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:30:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:43 np0005531754 podman[117281]: 2025-11-22 05:30:43.879341617 +0000 UTC m=+0.240120266 container init 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:30:43 np0005531754 podman[117281]: 2025-11-22 05:30:43.889162507 +0000 UTC m=+0.249941166 container start 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:30:43 np0005531754 podman[117281]: 2025-11-22 05:30:43.898444033 +0000 UTC m=+0.259222702 container attach 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:30:44 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 22 00:30:44 np0005531754 python3.9[117401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:30:44 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 22 00:30:45 np0005531754 flamboyant_keller[117297]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:30:45 np0005531754 flamboyant_keller[117297]: --> relative data size: 1.0
Nov 22 00:30:45 np0005531754 flamboyant_keller[117297]: --> All data devices are unavailable
Nov 22 00:30:45 np0005531754 podman[117281]: 2025-11-22 05:30:45.090711489 +0000 UTC m=+1.451490188 container died 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:30:45 np0005531754 systemd[1]: libpod-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope: Deactivated successfully.
Nov 22 00:30:45 np0005531754 systemd[1]: libpod-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope: Consumed 1.143s CPU time.
Nov 22 00:30:45 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762-merged.mount: Deactivated successfully.
Nov 22 00:30:45 np0005531754 podman[117281]: 2025-11-22 05:30:45.173876494 +0000 UTC m=+1.534655133 container remove 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:30:45 np0005531754 systemd[1]: libpod-conmon-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope: Deactivated successfully.
Nov 22 00:30:45 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 22 00:30:45 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 22 00:30:45 np0005531754 python3.9[117636]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:30:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:30:45 np0005531754 podman[117759]: 2025-11-22 05:30:45.964135513 +0000 UTC m=+0.052194535 container create 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:30:46 np0005531754 systemd[1]: Started libpod-conmon-7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc.scope.
Nov 22 00:30:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:30:46 np0005531754 podman[117759]: 2025-11-22 05:30:45.949052773 +0000 UTC m=+0.037111795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:30:46 np0005531754 podman[117759]: 2025-11-22 05:30:46.065827779 +0000 UTC m=+0.153886851 container init 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 00:30:46 np0005531754 podman[117759]: 2025-11-22 05:30:46.076276346 +0000 UTC m=+0.164335408 container start 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:30:46 np0005531754 nervous_ritchie[117789]: 167 167
Nov 22 00:30:46 np0005531754 systemd[1]: libpod-7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc.scope: Deactivated successfully.
Nov 22 00:30:46 np0005531754 podman[117759]: 2025-11-22 05:30:46.081911815 +0000 UTC m=+0.169970947 container attach 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:30:46 np0005531754 podman[117759]: 2025-11-22 05:30:46.082696716 +0000 UTC m=+0.170755778 container died 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:30:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7ca365b306d057f3201adcc6d8afe8df75853c706e8e508e05b730de33683eb8-merged.mount: Deactivated successfully.
Nov 22 00:30:46 np0005531754 podman[117759]: 2025-11-22 05:30:46.125352426 +0000 UTC m=+0.213411448 container remove 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:30:46 np0005531754 systemd[1]: libpod-conmon-7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc.scope: Deactivated successfully.
Nov 22 00:30:46 np0005531754 podman[117860]: 2025-11-22 05:30:46.326626152 +0000 UTC m=+0.047046058 container create f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:30:46 np0005531754 systemd[1]: Started libpod-conmon-f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea.scope.
Nov 22 00:30:46 np0005531754 podman[117860]: 2025-11-22 05:30:46.30578774 +0000 UTC m=+0.026207626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:30:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:30:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:30:46 np0005531754 podman[117860]: 2025-11-22 05:30:46.425956075 +0000 UTC m=+0.146376031 container init f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:30:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 22 00:30:46 np0005531754 podman[117860]: 2025-11-22 05:30:46.439303889 +0000 UTC m=+0.159723755 container start f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 00:30:46 np0005531754 podman[117860]: 2025-11-22 05:30:46.444563048 +0000 UTC m=+0.164982954 container attach f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:30:46 np0005531754 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 22 00:30:46 np0005531754 python3.9[117983]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:30:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:30:47 np0005531754 boring_perlman[117905]: {
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:    "0": [
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:        {
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "devices": [
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "/dev/loop3"
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            ],
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "lv_name": "ceph_lv0",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "lv_size": "21470642176",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "name": "ceph_lv0",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:            "tags": {
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.cluster_name": "ceph",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.crush_device_class": "",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.encrypted": "0",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:30:47 np0005531754 boring_perlman[117905]:                "ceph.osd_id": "0",
Nov 22 00:33:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:02 np0005531754 rsyslogd[1005]: imjournal: 1597 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 00:33:02 np0005531754 systemd[1]: session-41.scope: Deactivated successfully.
Nov 22 00:33:02 np0005531754 systemd[1]: session-41.scope: Consumed 4.532s CPU time.
Nov 22 00:33:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:02 np0005531754 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Nov 22 00:33:02 np0005531754 systemd-logind[798]: Removed session 41.
Nov 22 00:33:02 np0005531754 podman[134136]: 2025-11-22 05:33:02.297558868 +0000 UTC m=+0.052653513 container create 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:33:02 np0005531754 systemd[1]: Started libpod-conmon-5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1.scope.
Nov 22 00:33:02 np0005531754 podman[134136]: 2025-11-22 05:33:02.27096844 +0000 UTC m=+0.026063135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:33:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:33:02 np0005531754 podman[134136]: 2025-11-22 05:33:02.396464175 +0000 UTC m=+0.151558870 container init 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:33:02 np0005531754 podman[134136]: 2025-11-22 05:33:02.409090656 +0000 UTC m=+0.164185311 container start 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:33:02 np0005531754 podman[134136]: 2025-11-22 05:33:02.413584224 +0000 UTC m=+0.168678929 container attach 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:33:02 np0005531754 dazzling_roentgen[134153]: 167 167
Nov 22 00:33:02 np0005531754 systemd[1]: libpod-5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1.scope: Deactivated successfully.
Nov 22 00:33:02 np0005531754 podman[134136]: 2025-11-22 05:33:02.4176302 +0000 UTC m=+0.172724875 container died 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:33:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8f7d6ac9c4f398636a5b7416a6431e91881f43560031ae51a01f45a764d4d734-merged.mount: Deactivated successfully.
Nov 22 00:33:02 np0005531754 podman[134136]: 2025-11-22 05:33:02.465335982 +0000 UTC m=+0.220430627 container remove 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:33:02 np0005531754 systemd[1]: libpod-conmon-5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1.scope: Deactivated successfully.
Nov 22 00:33:02 np0005531754 podman[134177]: 2025-11-22 05:33:02.706086682 +0000 UTC m=+0.072716500 container create fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:33:02 np0005531754 systemd[1]: Started libpod-conmon-fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257.scope.
Nov 22 00:33:02 np0005531754 podman[134177]: 2025-11-22 05:33:02.67746738 +0000 UTC m=+0.044097208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:33:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:33:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:02 np0005531754 podman[134177]: 2025-11-22 05:33:02.820407192 +0000 UTC m=+0.187037050 container init fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:33:02 np0005531754 podman[134177]: 2025-11-22 05:33:02.831563715 +0000 UTC m=+0.198193523 container start fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:33:02 np0005531754 podman[134177]: 2025-11-22 05:33:02.835460838 +0000 UTC m=+0.202090656 container attach fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:33:03 np0005531754 clever_kepler[134193]: {
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:    "0": [
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:        {
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "devices": [
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "/dev/loop3"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            ],
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_name": "ceph_lv0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_size": "21470642176",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "name": "ceph_lv0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "tags": {
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cluster_name": "ceph",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.crush_device_class": "",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.encrypted": "0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osd_id": "0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.type": "block",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.vdo": "0"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            },
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "type": "block",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "vg_name": "ceph_vg0"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:        }
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:    ],
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:    "1": [
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:        {
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "devices": [
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "/dev/loop4"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            ],
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_name": "ceph_lv1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_size": "21470642176",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "name": "ceph_lv1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "tags": {
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cluster_name": "ceph",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.crush_device_class": "",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.encrypted": "0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osd_id": "1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.type": "block",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.vdo": "0"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            },
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "type": "block",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "vg_name": "ceph_vg1"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:        }
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:    ],
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:    "2": [
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:        {
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "devices": [
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "/dev/loop5"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            ],
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_name": "ceph_lv2",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_size": "21470642176",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "name": "ceph_lv2",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "tags": {
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.cluster_name": "ceph",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.crush_device_class": "",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.encrypted": "0",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osd_id": "2",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.type": "block",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:                "ceph.vdo": "0"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            },
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "type": "block",
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:            "vg_name": "ceph_vg2"
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:        }
Nov 22 00:33:03 np0005531754 clever_kepler[134193]:    ]
Nov 22 00:33:03 np0005531754 clever_kepler[134193]: }
Nov 22 00:33:03 np0005531754 systemd[1]: libpod-fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257.scope: Deactivated successfully.
Nov 22 00:33:03 np0005531754 podman[134177]: 2025-11-22 05:33:03.60588034 +0000 UTC m=+0.972510208 container died fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:33:03 np0005531754 systemd[1]: var-lib-containers-storage-overlay-210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd-merged.mount: Deactivated successfully.
Nov 22 00:33:03 np0005531754 podman[134177]: 2025-11-22 05:33:03.667520899 +0000 UTC m=+1.034150707 container remove fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:33:03 np0005531754 systemd[1]: libpod-conmon-fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257.scope: Deactivated successfully.
Nov 22 00:33:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:04 np0005531754 podman[134354]: 2025-11-22 05:33:04.433115455 +0000 UTC m=+0.044493799 container create 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 00:33:04 np0005531754 systemd[1]: Started libpod-conmon-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope.
Nov 22 00:33:04 np0005531754 podman[134354]: 2025-11-22 05:33:04.412603517 +0000 UTC m=+0.023981871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:33:04 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:33:04 np0005531754 podman[134354]: 2025-11-22 05:33:04.531997291 +0000 UTC m=+0.143375695 container init 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:33:04 np0005531754 podman[134354]: 2025-11-22 05:33:04.544431568 +0000 UTC m=+0.155809922 container start 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:33:04 np0005531754 podman[134354]: 2025-11-22 05:33:04.548431703 +0000 UTC m=+0.159810097 container attach 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:33:04 np0005531754 intelligent_babbage[134370]: 167 167
Nov 22 00:33:04 np0005531754 systemd[1]: libpod-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope: Deactivated successfully.
Nov 22 00:33:04 np0005531754 conmon[134370]: conmon 3bb523b6952f95c551aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope/container/memory.events
Nov 22 00:33:04 np0005531754 podman[134354]: 2025-11-22 05:33:04.554060301 +0000 UTC m=+0.165438665 container died 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:33:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6e1a96e6070c3e32d9cd7d595c017cb424086faa8230c57fac777d02859c3e6e-merged.mount: Deactivated successfully.
Nov 22 00:33:04 np0005531754 podman[134354]: 2025-11-22 05:33:04.602645415 +0000 UTC m=+0.214023729 container remove 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:33:04 np0005531754 systemd[1]: libpod-conmon-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope: Deactivated successfully.
Nov 22 00:33:04 np0005531754 podman[134393]: 2025-11-22 05:33:04.791288008 +0000 UTC m=+0.054608855 container create 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:33:04 np0005531754 systemd[1]: Started libpod-conmon-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope.
Nov 22 00:33:04 np0005531754 podman[134393]: 2025-11-22 05:33:04.764066003 +0000 UTC m=+0.027386850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:33:04 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:33:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:33:04 np0005531754 podman[134393]: 2025-11-22 05:33:04.885931572 +0000 UTC m=+0.149252469 container init 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:33:04 np0005531754 podman[134393]: 2025-11-22 05:33:04.898983735 +0000 UTC m=+0.162304582 container start 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 00:33:04 np0005531754 podman[134393]: 2025-11-22 05:33:04.903923795 +0000 UTC m=+0.167244622 container attach 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 00:33:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]: {
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "osd_id": 1,
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "type": "bluestore"
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:    },
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "osd_id": 2,
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "type": "bluestore"
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:    },
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "osd_id": 0,
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:        "type": "bluestore"
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]:    }
Nov 22 00:33:06 np0005531754 vigorous_khayyam[134409]: }
Nov 22 00:33:06 np0005531754 systemd[1]: libpod-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope: Deactivated successfully.
Nov 22 00:33:06 np0005531754 systemd[1]: libpod-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope: Consumed 1.166s CPU time.
Nov 22 00:33:06 np0005531754 podman[134393]: 2025-11-22 05:33:06.056129969 +0000 UTC m=+1.319450806 container died 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:33:06 np0005531754 systemd[1]: var-lib-containers-storage-overlay-97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808-merged.mount: Deactivated successfully.
Nov 22 00:33:06 np0005531754 podman[134393]: 2025-11-22 05:33:06.135526773 +0000 UTC m=+1.398847570 container remove 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:33:06 np0005531754 systemd[1]: libpod-conmon-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope: Deactivated successfully.
Nov 22 00:33:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:33:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:33:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:33:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:33:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d964052e-96fe-43bb-92af-3bdb7e57fc71 does not exist
Nov 22 00:33:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 170145ac-a64c-4b60-9f8d-5502bd159b97 does not exist
Nov 22 00:33:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:33:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:33:07 np0005531754 systemd-logind[798]: New session 42 of user zuul.
Nov 22 00:33:07 np0005531754 systemd[1]: Started Session 42 of User zuul.
Nov 22 00:33:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:08 np0005531754 python3.9[134656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:33:09 np0005531754 python3.9[134812]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:33:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:10 np0005531754 python3.9[134896]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 00:33:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:12 np0005531754 python3.9[135047]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:33:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:33:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:33:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:33:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:33:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:33:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:33:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:14 np0005531754 python3.9[135198]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 00:33:15 np0005531754 python3.9[135348]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:33:15 np0005531754 python3.9[135498]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:33:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:16 np0005531754 systemd[1]: session-42.scope: Deactivated successfully.
Nov 22 00:33:16 np0005531754 systemd[1]: session-42.scope: Consumed 6.540s CPU time.
Nov 22 00:33:16 np0005531754 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Nov 22 00:33:16 np0005531754 systemd-logind[798]: Removed session 42.
Nov 22 00:33:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:21 np0005531754 systemd-logind[798]: New session 43 of user zuul.
Nov 22 00:33:21 np0005531754 systemd[1]: Started Session 43 of User zuul.
Nov 22 00:33:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:22 np0005531754 python3.9[135676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:33:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:24 np0005531754 python3.9[135832]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:25 np0005531754 python3.9[135984]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:26 np0005531754 python3.9[136136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:26 np0005531754 python3.9[136259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789605.2980092-65-175876197547155/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=77951e7b50336235552278fe09afe27239a664e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:27 np0005531754 python3.9[136411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:28 np0005531754 python3.9[136534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789607.0948963-65-96015583973532/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cc36de41baf31cbc96876a4c978b53cf42c09af0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:29 np0005531754 python3.9[136686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:29 np0005531754 python3.9[136809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789608.550052-65-185223896507610/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=140c9cb76d53049a1cbc8d30c6f9b46f7bce8deb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:30 np0005531754 python3.9[136961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:31 np0005531754 python3.9[137113]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:32 np0005531754 python3.9[137265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:32 np0005531754 python3.9[137388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789611.5777376-124-6411806704758/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8ebf332b73c2404b9ed7c58d624aeca8bfa0f3b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:33 np0005531754 python3.9[137540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:34 np0005531754 python3.9[137663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789612.972755-124-262982423294656/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=fa92895d4da00e1b95bf94f57650d8e2b328cd80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:35 np0005531754 python3.9[137815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:35 np0005531754 python3.9[137938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789614.3956342-124-54353405290954/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d1d78ab8a7a90917655f98d9b0e79cb5a7a3c349 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:36 np0005531754 python3.9[138090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.136255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617136303, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1793, "num_deletes": 252, "total_data_size": 2587734, "memory_usage": 2635864, "flush_reason": "Manual Compaction"}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617150984, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1515194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7332, "largest_seqno": 9124, "table_properties": {"data_size": 1509313, "index_size": 2700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17300, "raw_average_key_size": 20, "raw_value_size": 1495308, "raw_average_value_size": 1810, "num_data_blocks": 127, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789449, "oldest_key_time": 1763789449, "file_creation_time": 1763789617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 14808 microseconds, and 7886 cpu microseconds.
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.151060) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1515194 bytes OK
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.151088) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.152951) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.152977) EVENT_LOG_v1 {"time_micros": 1763789617152967, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.153003) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2579779, prev total WAL file size 2579779, number of live WAL files 2.
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.154522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1479KB)], [20(6934KB)]
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617154592, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8616614, "oldest_snapshot_seqno": -1}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3426 keys, 6901522 bytes, temperature: kUnknown
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617201643, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6901522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6875162, "index_size": 16714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81861, "raw_average_key_size": 23, "raw_value_size": 6809806, "raw_average_value_size": 1987, "num_data_blocks": 741, "num_entries": 3426, "num_filter_entries": 3426, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763789617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.201982) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6901522 bytes
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.203590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 146.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 6.8 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(10.2) write-amplify(4.6) OK, records in: 3864, records dropped: 438 output_compression: NoCompression
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.203623) EVENT_LOG_v1 {"time_micros": 1763789617203607, "job": 6, "event": "compaction_finished", "compaction_time_micros": 47152, "compaction_time_cpu_micros": 31796, "output_level": 6, "num_output_files": 1, "total_output_size": 6901522, "num_input_records": 3864, "num_output_records": 3426, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617204213, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617206396, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.154371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:33:37 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:33:37 np0005531754 python3.9[138242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:38 np0005531754 python3.9[138394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:38 np0005531754 python3.9[138517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789617.6379368-183-102035034263227/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4ff65cfce7b98fb4d1bc2d32d002eedf36c3c536 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:39 np0005531754 python3.9[138669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:40 np0005531754 python3.9[138792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789619.05904-183-253288449109497/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=fa92895d4da00e1b95bf94f57650d8e2b328cd80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:41 np0005531754 python3.9[138944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:41 np0005531754 python3.9[139067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789620.5042253-183-105727498825125/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6d3f64c209326b8bc026e1d9bfc49bdb76d81f14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:43 np0005531754 python3.9[139219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:33:43
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms']
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:33:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:44 np0005531754 python3.9[139371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:44 np0005531754 python3.9[139494]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789623.4509308-251-1747911200371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:45 np0005531754 python3.9[139646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:46 np0005531754 python3.9[139798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:47 np0005531754 python3.9[139921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789625.8758318-275-220002299610371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:48 np0005531754 python3.9[140073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:48 np0005531754 python3.9[140225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:49 np0005531754 python3.9[140348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789628.3483827-299-45977039908410/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:50 np0005531754 python3.9[140500]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:51 np0005531754 python3.9[140652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:52 np0005531754 python3.9[140775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789630.7861557-323-164303567492929/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:33:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:33:52 np0005531754 python3.9[140927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:53 np0005531754 python3.9[141079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:54 np0005531754 python3.9[141202]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789633.1954396-347-126884892009720/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:55 np0005531754 python3.9[141354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:33:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:56 np0005531754 python3.9[141506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:33:56 np0005531754 python3.9[141629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789635.53907-371-65632485830244/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:33:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:33:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2062 writes, 9128 keys, 2062 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2062 writes, 2062 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2062 writes, 9128 keys, 2062 commit groups, 1.0 writes per commit group, ingest: 11.08 MB, 0.02 MB/s#012Interval WAL: 2062 writes, 2062 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    116.8      0.07              0.03         3    0.024       0      0       0.0       0.0#012  L6      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    174.7    155.2      0.09              0.05         2    0.043    7192    728       0.0       0.0#012 Sum      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     95.9    137.9      0.16              0.08         5    0.031    7192    728       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    102.2    146.6      0.15              0.08         4    0.037    7192    728       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    174.7    155.2      0.09              0.05         2    0.043    7192    728       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    134.5      0.06              0.03         2    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 308.00 MB usage: 541.33 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(36,450.11 KB,0.142714%) FilterBlock(6,27.80 KB,0.00881344%) IndexBlock(6,63.42 KB,0.0201089%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 00:33:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:33:57 np0005531754 systemd[1]: session-43.scope: Deactivated successfully.
Nov 22 00:33:57 np0005531754 systemd[1]: session-43.scope: Consumed 28.379s CPU time.
Nov 22 00:33:57 np0005531754 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Nov 22 00:33:57 np0005531754 systemd-logind[798]: Removed session 43.
Nov 22 00:33:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:33:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:03 np0005531754 systemd-logind[798]: New session 44 of user zuul.
Nov 22 00:34:03 np0005531754 systemd[1]: Started Session 44 of User zuul.
Nov 22 00:34:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:04 np0005531754 python3.9[141809]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:05 np0005531754 python3.9[141961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:05 np0005531754 python3.9[142084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789644.3873174-34-261050912144302/.source.conf _original_basename=ceph.conf follow=False checksum=1263ab632842a96ecd941a91f52f1b587861adae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:06 np0005531754 python3.9[142285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:34:07 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e584d636-a0e0-4275-9ba0-5e3653f6238e does not exist
Nov 22 00:34:07 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 0796c27e-9c1d-47d3-8e88-ffe2b2e280b7 does not exist
Nov 22 00:34:07 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 87390f8c-e5f3-494e-9444-b45a323d0bf1 does not exist
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:34:07 np0005531754 python3.9[142488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789646.1903982-34-185735434447221/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=18cfea5729768871b1211ef73b57421c54974f8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:34:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:34:07 np0005531754 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Nov 22 00:34:07 np0005531754 systemd[1]: session-44.scope: Deactivated successfully.
Nov 22 00:34:07 np0005531754 systemd[1]: session-44.scope: Consumed 3.306s CPU time.
Nov 22 00:34:07 np0005531754 systemd-logind[798]: Removed session 44.
Nov 22 00:34:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:08 np0005531754 podman[142654]: 2025-11-22 05:34:08.041560817 +0000 UTC m=+0.048407213 container create 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 00:34:08 np0005531754 systemd[1]: Started libpod-conmon-615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb.scope.
Nov 22 00:34:08 np0005531754 podman[142654]: 2025-11-22 05:34:08.015459831 +0000 UTC m=+0.022306317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:34:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:34:08 np0005531754 podman[142654]: 2025-11-22 05:34:08.150422229 +0000 UTC m=+0.157268665 container init 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:34:08 np0005531754 podman[142654]: 2025-11-22 05:34:08.165018943 +0000 UTC m=+0.171865369 container start 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:34:08 np0005531754 podman[142654]: 2025-11-22 05:34:08.169064179 +0000 UTC m=+0.175910635 container attach 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 00:34:08 np0005531754 dazzling_elbakyan[142670]: 167 167
Nov 22 00:34:08 np0005531754 systemd[1]: libpod-615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb.scope: Deactivated successfully.
Nov 22 00:34:08 np0005531754 podman[142654]: 2025-11-22 05:34:08.175335025 +0000 UTC m=+0.182181451 container died 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:34:08 np0005531754 systemd[1]: var-lib-containers-storage-overlay-da78836cb6b18e1d109916d1ba6a7f8d1ee64aedf729ae19b0d3866b9a3b6519-merged.mount: Deactivated successfully.
Nov 22 00:34:08 np0005531754 podman[142654]: 2025-11-22 05:34:08.22650921 +0000 UTC m=+0.233355646 container remove 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:34:08 np0005531754 systemd[1]: libpod-conmon-615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb.scope: Deactivated successfully.
Nov 22 00:34:08 np0005531754 podman[142694]: 2025-11-22 05:34:08.429930989 +0000 UTC m=+0.051049604 container create dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:34:08 np0005531754 systemd[1]: Started libpod-conmon-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope.
Nov 22 00:34:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:34:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:08 np0005531754 podman[142694]: 2025-11-22 05:34:08.489167826 +0000 UTC m=+0.110286441 container init dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:34:08 np0005531754 podman[142694]: 2025-11-22 05:34:08.408275739 +0000 UTC m=+0.029394374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:34:08 np0005531754 podman[142694]: 2025-11-22 05:34:08.502799175 +0000 UTC m=+0.123917790 container start dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:34:08 np0005531754 podman[142694]: 2025-11-22 05:34:08.505350911 +0000 UTC m=+0.126469526 container attach dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:34:09 np0005531754 eager_jennings[142711]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:34:09 np0005531754 eager_jennings[142711]: --> relative data size: 1.0
Nov 22 00:34:09 np0005531754 eager_jennings[142711]: --> All data devices are unavailable
Nov 22 00:34:09 np0005531754 systemd[1]: libpod-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope: Deactivated successfully.
Nov 22 00:34:09 np0005531754 systemd[1]: libpod-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope: Consumed 1.059s CPU time.
Nov 22 00:34:09 np0005531754 conmon[142711]: conmon dc159a4b2c8ca68abd13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope/container/memory.events
Nov 22 00:34:09 np0005531754 podman[142694]: 2025-11-22 05:34:09.605726734 +0000 UTC m=+1.226845369 container died dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:34:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390-merged.mount: Deactivated successfully.
Nov 22 00:34:09 np0005531754 podman[142694]: 2025-11-22 05:34:09.677785529 +0000 UTC m=+1.298904154 container remove dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:34:09 np0005531754 systemd[1]: libpod-conmon-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope: Deactivated successfully.
Nov 22 00:34:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:10 np0005531754 podman[142890]: 2025-11-22 05:34:10.420281971 +0000 UTC m=+0.045553818 container create 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:34:10 np0005531754 systemd[1]: Started libpod-conmon-0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768.scope.
Nov 22 00:34:10 np0005531754 podman[142890]: 2025-11-22 05:34:10.401185439 +0000 UTC m=+0.026457306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:34:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:34:10 np0005531754 podman[142890]: 2025-11-22 05:34:10.519334146 +0000 UTC m=+0.144606063 container init 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:34:10 np0005531754 podman[142890]: 2025-11-22 05:34:10.530164501 +0000 UTC m=+0.155436368 container start 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:34:10 np0005531754 podman[142890]: 2025-11-22 05:34:10.534563727 +0000 UTC m=+0.159835634 container attach 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:34:10 np0005531754 funny_jemison[142906]: 167 167
Nov 22 00:34:10 np0005531754 systemd[1]: libpod-0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768.scope: Deactivated successfully.
Nov 22 00:34:10 np0005531754 podman[142890]: 2025-11-22 05:34:10.538660884 +0000 UTC m=+0.163932721 container died 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:34:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-78f4f62d3c3adfbcd5ca309070f9cc909e7c51066d49a07e9042e2975329559f-merged.mount: Deactivated successfully.
Nov 22 00:34:10 np0005531754 podman[142890]: 2025-11-22 05:34:10.678813589 +0000 UTC m=+0.304085456 container remove 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:34:10 np0005531754 systemd[1]: libpod-conmon-0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768.scope: Deactivated successfully.
Nov 22 00:34:10 np0005531754 podman[142930]: 2025-11-22 05:34:10.923937134 +0000 UTC m=+0.070568387 container create 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:34:10 np0005531754 systemd[1]: Started libpod-conmon-303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52.scope.
Nov 22 00:34:10 np0005531754 podman[142930]: 2025-11-22 05:34:10.897077348 +0000 UTC m=+0.043708651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:34:11 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:34:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:11 np0005531754 podman[142930]: 2025-11-22 05:34:11.044069723 +0000 UTC m=+0.190701026 container init 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:34:11 np0005531754 podman[142930]: 2025-11-22 05:34:11.053571242 +0000 UTC m=+0.200202465 container start 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:34:11 np0005531754 podman[142930]: 2025-11-22 05:34:11.057028474 +0000 UTC m=+0.203659727 container attach 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]: {
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:    "0": [
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:        {
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "devices": [
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "/dev/loop3"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            ],
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_name": "ceph_lv0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_size": "21470642176",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "name": "ceph_lv0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "tags": {
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cluster_name": "ceph",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.crush_device_class": "",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.encrypted": "0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osd_id": "0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.type": "block",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.vdo": "0"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            },
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "type": "block",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "vg_name": "ceph_vg0"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:        }
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:    ],
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:    "1": [
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:        {
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "devices": [
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "/dev/loop4"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            ],
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_name": "ceph_lv1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_size": "21470642176",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "name": "ceph_lv1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "tags": {
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cluster_name": "ceph",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.crush_device_class": "",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.encrypted": "0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osd_id": "1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.type": "block",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.vdo": "0"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            },
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "type": "block",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "vg_name": "ceph_vg1"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:        }
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:    ],
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:    "2": [
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:        {
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "devices": [
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "/dev/loop5"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            ],
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_name": "ceph_lv2",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_size": "21470642176",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "name": "ceph_lv2",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "tags": {
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.cluster_name": "ceph",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.crush_device_class": "",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.encrypted": "0",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osd_id": "2",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.type": "block",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:                "ceph.vdo": "0"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            },
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "type": "block",
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:            "vg_name": "ceph_vg2"
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:        }
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]:    ]
Nov 22 00:34:11 np0005531754 exciting_khayyam[142946]: }
Nov 22 00:34:11 np0005531754 systemd[1]: libpod-303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52.scope: Deactivated successfully.
Nov 22 00:34:11 np0005531754 podman[142930]: 2025-11-22 05:34:11.809049897 +0000 UTC m=+0.955681120 container died 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 00:34:11 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c-merged.mount: Deactivated successfully.
Nov 22 00:34:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:11 np0005531754 podman[142930]: 2025-11-22 05:34:11.873641455 +0000 UTC m=+1.020272708 container remove 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:34:11 np0005531754 systemd[1]: libpod-conmon-303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52.scope: Deactivated successfully.
Nov 22 00:34:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:12 np0005531754 podman[143107]: 2025-11-22 05:34:12.623728167 +0000 UTC m=+0.051965457 container create 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:34:12 np0005531754 systemd[1]: Started libpod-conmon-3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5.scope.
Nov 22 00:34:12 np0005531754 podman[143107]: 2025-11-22 05:34:12.598034021 +0000 UTC m=+0.026271371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:34:12 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:34:12 np0005531754 podman[143107]: 2025-11-22 05:34:12.731009477 +0000 UTC m=+0.159246847 container init 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:34:12 np0005531754 podman[143107]: 2025-11-22 05:34:12.740627 +0000 UTC m=+0.168864301 container start 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:34:12 np0005531754 fervent_blackburn[143123]: 167 167
Nov 22 00:34:12 np0005531754 systemd[1]: libpod-3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5.scope: Deactivated successfully.
Nov 22 00:34:12 np0005531754 podman[143107]: 2025-11-22 05:34:12.744682037 +0000 UTC m=+0.172919337 container attach 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 00:34:12 np0005531754 podman[143107]: 2025-11-22 05:34:12.74817604 +0000 UTC m=+0.176413330 container died 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 00:34:12 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e0ae79c65c536224c233ed8ce635d2603d7739a382064ec50c1594cdbfe1da92-merged.mount: Deactivated successfully.
Nov 22 00:34:12 np0005531754 podman[143107]: 2025-11-22 05:34:12.80334637 +0000 UTC m=+0.231583650 container remove 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:34:12 np0005531754 systemd[1]: libpod-conmon-3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5.scope: Deactivated successfully.
Nov 22 00:34:12 np0005531754 systemd-logind[798]: New session 45 of user zuul.
Nov 22 00:34:12 np0005531754 systemd[1]: Started Session 45 of User zuul.
Nov 22 00:34:13 np0005531754 podman[143150]: 2025-11-22 05:34:13.033286585 +0000 UTC m=+0.070905874 container create 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 22 00:34:13 np0005531754 podman[143150]: 2025-11-22 05:34:12.999905349 +0000 UTC m=+0.037524698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:34:13 np0005531754 systemd[1]: Started libpod-conmon-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope.
Nov 22 00:34:13 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:34:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:34:13 np0005531754 podman[143150]: 2025-11-22 05:34:13.152186682 +0000 UTC m=+0.189805961 container init 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:34:13 np0005531754 podman[143150]: 2025-11-22 05:34:13.16540515 +0000 UTC m=+0.203024399 container start 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:34:13 np0005531754 podman[143150]: 2025-11-22 05:34:13.168522922 +0000 UTC m=+0.206142171 container attach 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:34:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:34:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:34:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:34:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:34:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:34:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:34:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:13 np0005531754 python3.9[143319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]: {
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "osd_id": 1,
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "type": "bluestore"
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:    },
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "osd_id": 2,
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "type": "bluestore"
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:    },
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "osd_id": 0,
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:        "type": "bluestore"
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]:    }
Nov 22 00:34:14 np0005531754 sharp_mclean[143217]: }
Nov 22 00:34:14 np0005531754 systemd[1]: libpod-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope: Deactivated successfully.
Nov 22 00:34:14 np0005531754 systemd[1]: libpod-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope: Consumed 1.091s CPU time.
Nov 22 00:34:14 np0005531754 podman[143150]: 2025-11-22 05:34:14.252981125 +0000 UTC m=+1.290600434 container died 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:34:14 np0005531754 systemd[1]: var-lib-containers-storage-overlay-dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198-merged.mount: Deactivated successfully.
Nov 22 00:34:14 np0005531754 podman[143150]: 2025-11-22 05:34:14.321100656 +0000 UTC m=+1.358719915 container remove 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:34:14 np0005531754 systemd[1]: libpod-conmon-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope: Deactivated successfully.
Nov 22 00:34:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:34:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:34:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:34:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:34:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f0f99895-8adc-4253-bf43-beee7c9bcc63 does not exist
Nov 22 00:34:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d5c40b52-0e08-432a-b36e-a33173f3ace1 does not exist
Nov 22 00:34:15 np0005531754 python3.9[143564]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:34:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:34:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:34:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:16 np0005531754 python3.9[143716]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:34:17 np0005531754 python3.9[143866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:34:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:18 np0005531754 python3.9[144020]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 00:34:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:20 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 22 00:34:20 np0005531754 python3.9[144176]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:34:21 np0005531754 python3.9[144260]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.469551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661469745, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 602, "num_deletes": 251, "total_data_size": 678484, "memory_usage": 689576, "flush_reason": "Manual Compaction"}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661475856, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 672652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9125, "largest_seqno": 9726, "table_properties": {"data_size": 669384, "index_size": 1176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7228, "raw_average_key_size": 18, "raw_value_size": 662863, "raw_average_value_size": 1690, "num_data_blocks": 54, "num_entries": 392, "num_filter_entries": 392, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789617, "oldest_key_time": 1763789617, "file_creation_time": 1763789661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6381 microseconds, and 2806 cpu microseconds.
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.475943) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 672652 bytes OK
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.475994) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478199) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478214) EVENT_LOG_v1 {"time_micros": 1763789661478209, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 675195, prev total WAL file size 675195, number of live WAL files 2.
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.479029) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(656KB)], [23(6739KB)]
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661479105, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7574174, "oldest_snapshot_seqno": -1}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3304 keys, 6046751 bytes, temperature: kUnknown
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661514628, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6046751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6022576, "index_size": 14786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80164, "raw_average_key_size": 24, "raw_value_size": 5960755, "raw_average_value_size": 1804, "num_data_blocks": 645, "num_entries": 3304, "num_filter_entries": 3304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763789661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.514919) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6046751 bytes
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.516870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.7 rd, 169.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(20.2) write-amplify(9.0) OK, records in: 3818, records dropped: 514 output_compression: NoCompression
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.516901) EVENT_LOG_v1 {"time_micros": 1763789661516886, "job": 8, "event": "compaction_finished", "compaction_time_micros": 35611, "compaction_time_cpu_micros": 17365, "output_level": 6, "num_output_files": 1, "total_output_size": 6046751, "num_input_records": 3818, "num_output_records": 3304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661517243, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661519644, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:34:21 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:34:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:23 np0005531754 python3.9[144413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:34:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:24 np0005531754 python3[144568]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 22 00:34:25 np0005531754 python3.9[144720]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:26 np0005531754 python3.9[144872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:27 np0005531754 python3.9[144950]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:28 np0005531754 python3.9[145102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:28 np0005531754 python3.9[145180]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._1vy4_1d recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:29 np0005531754 python3.9[145332]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:30 np0005531754 python3.9[145410]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:31 np0005531754 python3.9[145562]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:34:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:31 np0005531754 python3[145715]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 00:34:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:32 np0005531754 python3.9[145867]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:33 np0005531754 python3.9[145992]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789672.1297882-157-193595960011703/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:34 np0005531754 python3.9[146144]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:35 np0005531754 python3.9[146269]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789673.9106023-172-115541317199094/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:35 np0005531754 python3.9[146421]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:36 np0005531754 python3.9[146546]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789675.206675-187-29367413167044/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:37 np0005531754 python3.9[146698]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:37 np0005531754 python3.9[146823]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789676.5816927-202-218290269505715/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:38 np0005531754 python3.9[146975]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:39 np0005531754 python3.9[147100]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789677.9419641-217-241713955397323/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:39 np0005531754 python3.9[147252]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:40 np0005531754 python3.9[147404]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:34:41 np0005531754 python3.9[147559]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:42 np0005531754 python3.9[147711]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:34:42 np0005531754 python3.9[147864]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:34:43
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta']
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:34:43 np0005531754 python3.9[148018]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:34:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:44 np0005531754 python3.9[148173]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:45 np0005531754 python3.9[148323]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:34:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:46 np0005531754 python3.9[148476]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:34:46 np0005531754 ovs-vsctl[148477]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 22 00:34:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:47 np0005531754 python3.9[148629]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:34:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:48 np0005531754 python3.9[148784]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:34:48 np0005531754 ovs-vsctl[148785]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 22 00:34:49 np0005531754 python3.9[148935]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:34:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:49 np0005531754 python3.9[149089]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:34:50 np0005531754 python3.9[149241]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:51 np0005531754 python3.9[149319]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:34:51 np0005531754 python3.9[149471]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:52 np0005531754 python3.9[149549]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:34:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:34:52 np0005531754 python3.9[149701]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:53 np0005531754 python3.9[149853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:53 np0005531754 python3.9[149931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:54 np0005531754 python3.9[150083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:55 np0005531754 python3.9[150161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:56 np0005531754 python3.9[150313]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:34:56 np0005531754 systemd[1]: Reloading.
Nov 22 00:34:56 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:34:56 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:34:57 np0005531754 python3.9[150502]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:34:57 np0005531754 python3.9[150580]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:34:58 np0005531754 python3.9[150732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:34:58 np0005531754 python3.9[150810]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:34:59 np0005531754 python3.9[150962]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:34:59 np0005531754 systemd[1]: Reloading.
Nov 22 00:34:59 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:34:59 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:34:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:00 np0005531754 systemd[1]: Starting Create netns directory...
Nov 22 00:35:00 np0005531754 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 00:35:00 np0005531754 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 00:35:00 np0005531754 systemd[1]: Finished Create netns directory.
Nov 22 00:35:00 np0005531754 python3.9[151155]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:01 np0005531754 python3.9[151307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:02 np0005531754 python3.9[151430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789701.1731179-468-122598031462066/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:03 np0005531754 python3.9[151582]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:03 np0005531754 python3.9[151734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:04 np0005531754 python3.9[151857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789703.3898373-493-33275843032799/.source.json _original_basename=.1q0p8czq follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:35:05 np0005531754 python3.9[152009]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:35:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:07 np0005531754 python3.9[152436]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 22 00:35:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:08 np0005531754 python3.9[152588]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 00:35:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:09 np0005531754 python3.9[152740]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 00:35:11 np0005531754 python3[152919]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 00:35:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:35:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:35:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:35:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:35:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:35:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:35:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:35:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:35:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:17 np0005531754 podman[152934]: 2025-11-22 05:35:17.200198032 +0000 UTC m=+5.343958246 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 00:35:17 np0005531754 podman[153290]: 2025-11-22 05:35:17.389892654 +0000 UTC m=+0.064909039 container create 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Nov 22 00:35:17 np0005531754 podman[153290]: 2025-11-22 05:35:17.360054863 +0000 UTC m=+0.035071238 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 00:35:17 np0005531754 python3[152919]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:17 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c3ac82fb-6974-43b1-88f2-31a36f3f340e does not exist
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:35:17 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 6cfcf25a-8676-46e2-9369-56ea72ea45f8 does not exist
Nov 22 00:35:17 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f478eb82-0d23-4284-bb13-43f0659ba8ce does not exist
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:35:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:35:18 np0005531754 podman[153634]: 2025-11-22 05:35:18.307105463 +0000 UTC m=+0.043827222 container create ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:35:18 np0005531754 systemd[1]: Started libpod-conmon-ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2.scope.
Nov 22 00:35:18 np0005531754 podman[153634]: 2025-11-22 05:35:18.287311602 +0000 UTC m=+0.024033391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:35:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:35:18 np0005531754 python3.9[153631]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:35:18 np0005531754 podman[153634]: 2025-11-22 05:35:18.409821413 +0000 UTC m=+0.146543162 container init ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:35:18 np0005531754 podman[153634]: 2025-11-22 05:35:18.421776426 +0000 UTC m=+0.158498405 container start ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:35:18 np0005531754 awesome_murdock[153651]: 167 167
Nov 22 00:35:18 np0005531754 podman[153634]: 2025-11-22 05:35:18.42841907 +0000 UTC m=+0.165140839 container attach ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:35:18 np0005531754 systemd[1]: libpod-ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2.scope: Deactivated successfully.
Nov 22 00:35:18 np0005531754 podman[153634]: 2025-11-22 05:35:18.429775518 +0000 UTC m=+0.166497267 container died ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:35:18 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5c14c4e294cb987b012e8b0d2947abdbfd77fc3d620c98e584ac0576169412ed-merged.mount: Deactivated successfully.
Nov 22 00:35:18 np0005531754 podman[153634]: 2025-11-22 05:35:18.476965532 +0000 UTC m=+0.213687311 container remove ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:35:18 np0005531754 systemd[1]: libpod-conmon-ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2.scope: Deactivated successfully.
Nov 22 00:35:18 np0005531754 podman[153700]: 2025-11-22 05:35:18.622734511 +0000 UTC m=+0.041126456 container create 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:35:18 np0005531754 systemd[1]: Started libpod-conmon-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope.
Nov 22 00:35:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:35:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:18 np0005531754 podman[153700]: 2025-11-22 05:35:18.607908039 +0000 UTC m=+0.026300014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:35:18 np0005531754 podman[153700]: 2025-11-22 05:35:18.716655406 +0000 UTC m=+0.135047421 container init 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:35:18 np0005531754 podman[153700]: 2025-11-22 05:35:18.728153356 +0000 UTC m=+0.146545341 container start 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:35:18 np0005531754 podman[153700]: 2025-11-22 05:35:18.732174188 +0000 UTC m=+0.150566173 container attach 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:35:19 np0005531754 python3.9[153848]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:35:19 np0005531754 nice_euclid[153716]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:35:19 np0005531754 nice_euclid[153716]: --> relative data size: 1.0
Nov 22 00:35:19 np0005531754 nice_euclid[153716]: --> All data devices are unavailable
Nov 22 00:35:19 np0005531754 systemd[1]: libpod-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope: Deactivated successfully.
Nov 22 00:35:19 np0005531754 podman[153700]: 2025-11-22 05:35:19.857505071 +0000 UTC m=+1.275897026 container died 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:35:19 np0005531754 systemd[1]: libpod-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope: Consumed 1.072s CPU time.
Nov 22 00:35:19 np0005531754 systemd[1]: var-lib-containers-storage-overlay-004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0-merged.mount: Deactivated successfully.
Nov 22 00:35:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:19 np0005531754 podman[153700]: 2025-11-22 05:35:19.92892144 +0000 UTC m=+1.347313395 container remove 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:35:19 np0005531754 systemd[1]: libpod-conmon-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope: Deactivated successfully.
Nov 22 00:35:19 np0005531754 python3.9[153944]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:35:20 np0005531754 podman[154252]: 2025-11-22 05:35:20.651042117 +0000 UTC m=+0.058403308 container create 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:35:20 np0005531754 systemd[1]: Started libpod-conmon-462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223.scope.
Nov 22 00:35:20 np0005531754 podman[154252]: 2025-11-22 05:35:20.623317264 +0000 UTC m=+0.030678505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:35:20 np0005531754 python3.9[154251]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789720.0426662-581-96350675462284/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:35:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:35:20 np0005531754 podman[154252]: 2025-11-22 05:35:20.749838847 +0000 UTC m=+0.157200068 container init 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:35:20 np0005531754 podman[154252]: 2025-11-22 05:35:20.761213154 +0000 UTC m=+0.168574345 container start 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 00:35:20 np0005531754 podman[154252]: 2025-11-22 05:35:20.766025198 +0000 UTC m=+0.173386389 container attach 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:35:20 np0005531754 elegant_jones[154268]: 167 167
Nov 22 00:35:20 np0005531754 systemd[1]: libpod-462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223.scope: Deactivated successfully.
Nov 22 00:35:20 np0005531754 podman[154252]: 2025-11-22 05:35:20.769417712 +0000 UTC m=+0.176778903 container died 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:35:20 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0b7e73b4f28471de86e2b74b2c72052fc96cdc1e7e3106bdaee7cf3776f7444a-merged.mount: Deactivated successfully.
Nov 22 00:35:20 np0005531754 podman[154252]: 2025-11-22 05:35:20.823764646 +0000 UTC m=+0.231125837 container remove 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 00:35:20 np0005531754 systemd[1]: libpod-conmon-462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223.scope: Deactivated successfully.
Nov 22 00:35:21 np0005531754 podman[154334]: 2025-11-22 05:35:21.022032866 +0000 UTC m=+0.055138376 container create ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:35:21 np0005531754 systemd[1]: Started libpod-conmon-ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd.scope.
Nov 22 00:35:21 np0005531754 podman[154334]: 2025-11-22 05:35:21.00276092 +0000 UTC m=+0.035866430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:35:21 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:35:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:21 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:21 np0005531754 podman[154334]: 2025-11-22 05:35:21.127548464 +0000 UTC m=+0.160654004 container init ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:35:21 np0005531754 podman[154334]: 2025-11-22 05:35:21.140855495 +0000 UTC m=+0.173961025 container start ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:35:21 np0005531754 podman[154334]: 2025-11-22 05:35:21.148795925 +0000 UTC m=+0.181901445 container attach ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:35:21 np0005531754 python3.9[154383]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:35:21 np0005531754 systemd[1]: Reloading.
Nov 22 00:35:21 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:35:21 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:35:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]: {
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:    "0": [
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:        {
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "devices": [
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "/dev/loop3"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            ],
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_name": "ceph_lv0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_size": "21470642176",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "name": "ceph_lv0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "tags": {
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cluster_name": "ceph",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.crush_device_class": "",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.encrypted": "0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osd_id": "0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.type": "block",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.vdo": "0"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            },
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "type": "block",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "vg_name": "ceph_vg0"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:        }
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:    ],
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:    "1": [
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:        {
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "devices": [
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "/dev/loop4"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            ],
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_name": "ceph_lv1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_size": "21470642176",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "name": "ceph_lv1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "tags": {
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cluster_name": "ceph",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.crush_device_class": "",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.encrypted": "0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osd_id": "1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.type": "block",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.vdo": "0"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            },
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "type": "block",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "vg_name": "ceph_vg1"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:        }
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:    ],
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:    "2": [
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:        {
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "devices": [
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "/dev/loop5"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            ],
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_name": "ceph_lv2",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_size": "21470642176",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "name": "ceph_lv2",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "tags": {
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.cluster_name": "ceph",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.crush_device_class": "",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.encrypted": "0",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osd_id": "2",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.type": "block",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:                "ceph.vdo": "0"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            },
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "type": "block",
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:            "vg_name": "ceph_vg2"
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:        }
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]:    ]
Nov 22 00:35:21 np0005531754 stoic_hermann[154381]: }
Nov 22 00:35:22 np0005531754 systemd[1]: libpod-ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd.scope: Deactivated successfully.
Nov 22 00:35:22 np0005531754 podman[154334]: 2025-11-22 05:35:22.03065564 +0000 UTC m=+1.063761220 container died ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:35:22 np0005531754 systemd[1]: var-lib-containers-storage-overlay-cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982-merged.mount: Deactivated successfully.
Nov 22 00:35:22 np0005531754 podman[154334]: 2025-11-22 05:35:22.108644772 +0000 UTC m=+1.141750302 container remove ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:35:22 np0005531754 systemd[1]: libpod-conmon-ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd.scope: Deactivated successfully.
Nov 22 00:35:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:22 np0005531754 python3.9[154502]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:35:22 np0005531754 systemd[1]: Reloading.
Nov 22 00:35:22 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:35:22 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:35:22 np0005531754 systemd[1]: Starting ovn_controller container...
Nov 22 00:35:22 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:35:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139338a45710c654aec62114b17db3a4fe19b4baee80acf159f7b3dd93a9a697/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:22 np0005531754 systemd[1]: Started /usr/bin/podman healthcheck run 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736.
Nov 22 00:35:22 np0005531754 podman[154656]: 2025-11-22 05:35:22.879461893 +0000 UTC m=+0.157044423 container init 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:35:22 np0005531754 ovn_controller[154671]: + sudo -E kolla_set_configs
Nov 22 00:35:22 np0005531754 podman[154656]: 2025-11-22 05:35:22.925348541 +0000 UTC m=+0.202931031 container start 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 22 00:35:22 np0005531754 edpm-start-podman-container[154656]: ovn_controller
Nov 22 00:35:22 np0005531754 systemd[1]: Created slice User Slice of UID 0.
Nov 22 00:35:22 np0005531754 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 22 00:35:23 np0005531754 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 22 00:35:23 np0005531754 systemd[1]: Starting User Manager for UID 0...
Nov 22 00:35:23 np0005531754 podman[154700]: 2025-11-22 05:35:23.02980012 +0000 UTC m=+0.087878618 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 00:35:23 np0005531754 edpm-start-podman-container[154655]: Creating additional drop-in dependency for "ovn_controller" (0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736)
Nov 22 00:35:23 np0005531754 systemd[1]: 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736-3df806f0648129ff.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 00:35:23 np0005531754 systemd[1]: 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736-3df806f0648129ff.service: Failed with result 'exit-code'.
Nov 22 00:35:23 np0005531754 systemd[1]: Reloading.
Nov 22 00:35:23 np0005531754 podman[154754]: 2025-11-22 05:35:23.130279467 +0000 UTC m=+0.068899559 container create 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:35:23 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:35:23 np0005531754 systemd[154742]: Queued start job for default target Main User Target.
Nov 22 00:35:23 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:35:23 np0005531754 podman[154754]: 2025-11-22 05:35:23.105911739 +0000 UTC m=+0.044531841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:35:23 np0005531754 systemd[154742]: Created slice User Application Slice.
Nov 22 00:35:23 np0005531754 systemd[154742]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 22 00:35:23 np0005531754 systemd[154742]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 00:35:23 np0005531754 systemd[154742]: Reached target Paths.
Nov 22 00:35:23 np0005531754 systemd[154742]: Reached target Timers.
Nov 22 00:35:23 np0005531754 systemd[154742]: Starting D-Bus User Message Bus Socket...
Nov 22 00:35:23 np0005531754 systemd[154742]: Starting Create User's Volatile Files and Directories...
Nov 22 00:35:23 np0005531754 systemd[154742]: Finished Create User's Volatile Files and Directories.
Nov 22 00:35:23 np0005531754 systemd[154742]: Listening on D-Bus User Message Bus Socket.
Nov 22 00:35:23 np0005531754 systemd[154742]: Reached target Sockets.
Nov 22 00:35:23 np0005531754 systemd[154742]: Reached target Basic System.
Nov 22 00:35:23 np0005531754 systemd[154742]: Reached target Main User Target.
Nov 22 00:35:23 np0005531754 systemd[154742]: Startup finished in 168ms.
Nov 22 00:35:23 np0005531754 systemd[1]: Started User Manager for UID 0.
Nov 22 00:35:23 np0005531754 systemd[1]: Started ovn_controller container.
Nov 22 00:35:23 np0005531754 systemd[1]: Started libpod-conmon-1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82.scope.
Nov 22 00:35:23 np0005531754 systemd[1]: Started Session c1 of User root.
Nov 22 00:35:23 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:35:23 np0005531754 podman[154754]: 2025-11-22 05:35:23.449540617 +0000 UTC m=+0.388160709 container init 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:35:23 np0005531754 podman[154754]: 2025-11-22 05:35:23.458799105 +0000 UTC m=+0.397419167 container start 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:35:23 np0005531754 podman[154754]: 2025-11-22 05:35:23.462039314 +0000 UTC m=+0.400659416 container attach 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:35:23 np0005531754 ecstatic_hodgkin[154821]: 167 167
Nov 22 00:35:23 np0005531754 systemd[1]: libpod-1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82.scope: Deactivated successfully.
Nov 22 00:35:23 np0005531754 podman[154754]: 2025-11-22 05:35:23.465898552 +0000 UTC m=+0.404518614 container died 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:35:23 np0005531754 systemd[1]: var-lib-containers-storage-overlay-23d419523812b08647e78574c2473d0c0a6fb05bf1db1a3b1b27510769109ac5-merged.mount: Deactivated successfully.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: INFO:__main__:Validating config file
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: INFO:__main__:Writing out command to execute
Nov 22 00:35:23 np0005531754 podman[154754]: 2025-11-22 05:35:23.502311456 +0000 UTC m=+0.440931518 container remove 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:35:23 np0005531754 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: ++ cat /run_command
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + ARGS=
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + sudo kolla_copy_cacerts
Nov 22 00:35:23 np0005531754 systemd[1]: libpod-conmon-1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82.scope: Deactivated successfully.
Nov 22 00:35:23 np0005531754 systemd[1]: Started Session c2 of User root.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + [[ ! -n '' ]]
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + . kolla_extend_start
Nov 22 00:35:23 np0005531754 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + umask 0022
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.5915] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.5922] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.5934] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.5941] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.5945] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 00:35:23 np0005531754 kernel: br-int: entered promiscuous mode
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 00:35:23 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:23Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.6251] manager: (ovn-4b7cc9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 22 00:35:23 np0005531754 kernel: genev_sys_6081: entered promiscuous mode
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.6449] device (genev_sys_6081): carrier: link connected
Nov 22 00:35:23 np0005531754 NetworkManager[49751]: <info>  [1763789723.6454] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 22 00:35:23 np0005531754 systemd-udevd[154904]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 00:35:23 np0005531754 systemd-udevd[154905]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 00:35:23 np0005531754 podman[154908]: 2025-11-22 05:35:23.711349296 +0000 UTC m=+0.052396910 container create e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 00:35:23 np0005531754 systemd[1]: Started libpod-conmon-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope.
Nov 22 00:35:23 np0005531754 podman[154908]: 2025-11-22 05:35:23.68453839 +0000 UTC m=+0.025586084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:35:23 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:35:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:23 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:35:23 np0005531754 podman[154908]: 2025-11-22 05:35:23.813446339 +0000 UTC m=+0.154493953 container init e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:35:23 np0005531754 podman[154908]: 2025-11-22 05:35:23.822747428 +0000 UTC m=+0.163795032 container start e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:35:23 np0005531754 podman[154908]: 2025-11-22 05:35:23.826350549 +0000 UTC m=+0.167398213 container attach e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:35:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:24 np0005531754 python3.9[155036]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:35:24 np0005531754 ovs-vsctl[155037]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]: {
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "osd_id": 1,
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "type": "bluestore"
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:    },
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "osd_id": 2,
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "type": "bluestore"
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:    },
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "osd_id": 0,
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:        "type": "bluestore"
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]:    }
Nov 22 00:35:24 np0005531754 friendly_jackson[154956]: }
Nov 22 00:35:24 np0005531754 systemd[1]: libpod-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope: Deactivated successfully.
Nov 22 00:35:24 np0005531754 systemd[1]: libpod-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope: Consumed 1.110s CPU time.
Nov 22 00:35:24 np0005531754 podman[154908]: 2025-11-22 05:35:24.933870506 +0000 UTC m=+1.274918170 container died e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 00:35:24 np0005531754 systemd[1]: var-lib-containers-storage-overlay-301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467-merged.mount: Deactivated successfully.
Nov 22 00:35:25 np0005531754 podman[154908]: 2025-11-22 05:35:25.029991342 +0000 UTC m=+1.371038986 container remove e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:35:25 np0005531754 python3.9[155211]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:35:25 np0005531754 systemd[1]: libpod-conmon-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope: Deactivated successfully.
Nov 22 00:35:25 np0005531754 ovs-vsctl[155233]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 22 00:35:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:35:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:35:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e92560b4-9bae-469c-a50f-cc40de643a59 does not exist
Nov 22 00:35:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ddf92920-6fd5-47a4-8cc9-cb95b2a4c989 does not exist
Nov 22 00:35:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:35:26 np0005531754 python3.9[155436]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:35:26 np0005531754 ovs-vsctl[155437]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 22 00:35:26 np0005531754 systemd[1]: session-45.scope: Deactivated successfully.
Nov 22 00:35:26 np0005531754 systemd[1]: session-45.scope: Consumed 1min 4.188s CPU time.
Nov 22 00:35:26 np0005531754 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Nov 22 00:35:26 np0005531754 systemd-logind[798]: Removed session 45.
Nov 22 00:35:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:32 np0005531754 systemd-logind[798]: New session 47 of user zuul.
Nov 22 00:35:32 np0005531754 systemd[1]: Started Session 47 of User zuul.
Nov 22 00:35:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:33 np0005531754 python3.9[155615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:35:33 np0005531754 systemd[1]: Stopping User Manager for UID 0...
Nov 22 00:35:33 np0005531754 systemd[154742]: Activating special unit Exit the Session...
Nov 22 00:35:33 np0005531754 systemd[154742]: Stopped target Main User Target.
Nov 22 00:35:33 np0005531754 systemd[154742]: Stopped target Basic System.
Nov 22 00:35:33 np0005531754 systemd[154742]: Stopped target Paths.
Nov 22 00:35:33 np0005531754 systemd[154742]: Stopped target Sockets.
Nov 22 00:35:33 np0005531754 systemd[154742]: Stopped target Timers.
Nov 22 00:35:33 np0005531754 systemd[154742]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 00:35:33 np0005531754 systemd[154742]: Closed D-Bus User Message Bus Socket.
Nov 22 00:35:33 np0005531754 systemd[154742]: Stopped Create User's Volatile Files and Directories.
Nov 22 00:35:33 np0005531754 systemd[154742]: Removed slice User Application Slice.
Nov 22 00:35:33 np0005531754 systemd[154742]: Reached target Shutdown.
Nov 22 00:35:33 np0005531754 systemd[154742]: Finished Exit the Session.
Nov 22 00:35:33 np0005531754 systemd[154742]: Reached target Exit the Session.
Nov 22 00:35:33 np0005531754 systemd[1]: user@0.service: Deactivated successfully.
Nov 22 00:35:33 np0005531754 systemd[1]: Stopped User Manager for UID 0.
Nov 22 00:35:33 np0005531754 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 22 00:35:33 np0005531754 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 22 00:35:33 np0005531754 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 22 00:35:33 np0005531754 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 22 00:35:33 np0005531754 systemd[1]: Removed slice User Slice of UID 0.
Nov 22 00:35:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:34 np0005531754 python3.9[155773]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:35 np0005531754 python3.9[155925]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:35 np0005531754 python3.9[156077]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:36 np0005531754 python3.9[156229]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:36 np0005531754 auditd[704]: Audit daemon rotating log files
Nov 22 00:35:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:37 np0005531754 python3.9[156381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:38 np0005531754 python3.9[156531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:35:39 np0005531754 python3.9[156683]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 00:35:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:40 np0005531754 python3.9[156833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:41 np0005531754 python3.9[156954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789740.1331282-86-155888685922788/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:35:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5414 writes, 23K keys, 5414 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5414 writes, 774 syncs, 6.99 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5414 writes, 23K keys, 5414 commit groups, 1.0 writes per commit group, ingest: 18.51 MB, 0.03 MB/s#012Interval WAL: 5414 writes, 774 syncs, 6.99 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 00:35:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:42 np0005531754 python3.9[157105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:43 np0005531754 python3.9[157226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789741.8837233-101-138408169660805/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:35:43
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:35:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:44 np0005531754 python3.9[157378]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:35:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:46 np0005531754 python3.9[157462]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:35:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:35:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.3 total, 600.0 interval
Cumulative writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 19.67 MB, 0.03 MB/s
Interval WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 22 00:35:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:48 np0005531754 python3.9[157617]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:35:49 np0005531754 python3.9[157770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:50 np0005531754 python3.9[157891]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789748.8178408-138-20842925345923/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:50 np0005531754 python3.9[158041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:51 np0005531754 python3.9[158162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789750.2469435-138-157367254458691/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:52 np0005531754 python3.9[158312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:35:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:35:53 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:53Z|00025|memory|INFO|16512 kB peak resident set size after 29.6 seconds
Nov 22 00:35:53 np0005531754 ovn_controller[154671]: 2025-11-22T05:35:53Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 22 00:35:53 np0005531754 podman[158407]: 2025-11-22 05:35:53.208928767 +0000 UTC m=+0.131533073 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 00:35:53 np0005531754 python3.9[158446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789752.1447866-182-218181689703308/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:35:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 18.55 MB, 0.03 MB/s
Interval WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 00:35:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:54 np0005531754 python3.9[158607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:54 np0005531754 python3.9[158728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789753.5435038-182-133339693348897/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:55 np0005531754 python3.9[158878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:35:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:56 np0005531754 python3.9[159032]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:56 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 00:35:57 np0005531754 python3.9[159184]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:35:57 np0005531754 python3.9[159262]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:35:58 np0005531754 python3.9[159414]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:35:58 np0005531754 python3.9[159492]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:35:59 np0005531754 python3.9[159644]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:35:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:00 np0005531754 python3.9[159796]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:36:01 np0005531754 python3.9[159874]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:01 np0005531754 python3.9[160026]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:36:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:02 np0005531754 python3.9[160104]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:03 np0005531754 python3.9[160256]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:03 np0005531754 systemd[1]: Reloading.
Nov 22 00:36:03 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:36:03 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:36:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:04 np0005531754 python3.9[160444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:36:05 np0005531754 python3.9[160522]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:05 np0005531754 python3.9[160674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:36:06 np0005531754 python3.9[160752]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:07 np0005531754 python3.9[160904]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:07 np0005531754 systemd[1]: Reloading.
Nov 22 00:36:07 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:36:07 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:36:07 np0005531754 systemd[1]: Starting Create netns directory...
Nov 22 00:36:07 np0005531754 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 00:36:07 np0005531754 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 00:36:07 np0005531754 systemd[1]: Finished Create netns directory.
Nov 22 00:36:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:08 np0005531754 python3.9[161098]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:36:09 np0005531754 python3.9[161250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:36:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:10 np0005531754 python3.9[161373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789768.856824-333-93780099256887/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:36:11 np0005531754 python3.9[161525]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:36:11 np0005531754 python3.9[161677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:36:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:12 np0005531754 python3.9[161800]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789771.3650877-358-217053751564231/.source.json _original_basename=.uo6dppdf follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:13 np0005531754 python3.9[161952]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:36:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:36:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:36:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:36:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:36:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:36:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:16 np0005531754 python3.9[162379]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 22 00:36:17 np0005531754 python3.9[162531]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 00:36:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:18 np0005531754 python3.9[162683]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 00:36:19 np0005531754 python3[162863]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 00:36:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:27 np0005531754 podman[162942]: 2025-11-22 05:36:27.429081471 +0000 UTC m=+3.692549660 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 00:36:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:36:28 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 051145ba-8fbd-47d7-9b9a-0dc036859b9d does not exist
Nov 22 00:36:28 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 8705bc8b-24fa-4c71-86ba-79481bc7ebcb does not exist
Nov 22 00:36:28 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 1a9a401e-f18b-4205-8be5-48b4e1789e88 does not exist
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:36:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:36:29 np0005531754 podman[162877]: 2025-11-22 05:36:29.059026003 +0000 UTC m=+9.076510320 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 00:36:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:36:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:36:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:36:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:36:29 np0005531754 podman[163220]: 2025-11-22 05:36:29.223010835 +0000 UTC m=+0.063874425 container create 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 00:36:29 np0005531754 podman[163220]: 2025-11-22 05:36:29.186380941 +0000 UTC m=+0.027244601 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 00:36:29 np0005531754 python3[162863]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 00:36:29 np0005531754 podman[163417]: 2025-11-22 05:36:29.755392049 +0000 UTC m=+0.053281408 container create 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:36:29 np0005531754 systemd[1]: Started libpod-conmon-2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b.scope.
Nov 22 00:36:29 np0005531754 podman[163417]: 2025-11-22 05:36:29.735834348 +0000 UTC m=+0.033723737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:36:29 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:36:29 np0005531754 podman[163417]: 2025-11-22 05:36:29.857615954 +0000 UTC m=+0.155505393 container init 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:36:29 np0005531754 podman[163417]: 2025-11-22 05:36:29.869608779 +0000 UTC m=+0.167498178 container start 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:36:29 np0005531754 podman[163417]: 2025-11-22 05:36:29.881647736 +0000 UTC m=+0.179537125 container attach 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:36:29 np0005531754 vigilant_bhaskara[163466]: 167 167
Nov 22 00:36:29 np0005531754 systemd[1]: libpod-2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b.scope: Deactivated successfully.
Nov 22 00:36:29 np0005531754 podman[163417]: 2025-11-22 05:36:29.893768806 +0000 UTC m=+0.191658155 container died 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:36:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:29 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9c214f685565a46427460a62fd6f7c0bf6096346cd5da2258596dde3221415af-merged.mount: Deactivated successfully.
Nov 22 00:36:29 np0005531754 podman[163417]: 2025-11-22 05:36:29.956117808 +0000 UTC m=+0.254007197 container remove 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:36:29 np0005531754 systemd[1]: libpod-conmon-2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b.scope: Deactivated successfully.
Nov 22 00:36:30 np0005531754 podman[163536]: 2025-11-22 05:36:30.177398625 +0000 UTC m=+0.070007901 container create 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:36:30 np0005531754 python3.9[163528]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:36:30 np0005531754 systemd[1]: Started libpod-conmon-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope.
Nov 22 00:36:30 np0005531754 podman[163536]: 2025-11-22 05:36:30.145682145 +0000 UTC m=+0.038291471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:36:30 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:36:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:30 np0005531754 podman[163536]: 2025-11-22 05:36:30.284510264 +0000 UTC m=+0.177119600 container init 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:36:30 np0005531754 podman[163536]: 2025-11-22 05:36:30.297461185 +0000 UTC m=+0.190070471 container start 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 00:36:30 np0005531754 podman[163536]: 2025-11-22 05:36:30.301457604 +0000 UTC m=+0.194066880 container attach 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 00:36:31 np0005531754 python3.9[163711]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:31 np0005531754 wonderful_archimedes[163555]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:36:31 np0005531754 wonderful_archimedes[163555]: --> relative data size: 1.0
Nov 22 00:36:31 np0005531754 wonderful_archimedes[163555]: --> All data devices are unavailable
Nov 22 00:36:31 np0005531754 systemd[1]: libpod-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope: Deactivated successfully.
Nov 22 00:36:31 np0005531754 podman[163536]: 2025-11-22 05:36:31.485715626 +0000 UTC m=+1.378324942 container died 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:36:31 np0005531754 systemd[1]: libpod-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope: Consumed 1.084s CPU time.
Nov 22 00:36:31 np0005531754 systemd[1]: var-lib-containers-storage-overlay-2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe-merged.mount: Deactivated successfully.
Nov 22 00:36:31 np0005531754 podman[163536]: 2025-11-22 05:36:31.595691831 +0000 UTC m=+1.488301117 container remove 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:36:31 np0005531754 systemd[1]: libpod-conmon-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope: Deactivated successfully.
Nov 22 00:36:31 np0005531754 python3.9[163810]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:36:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:32 np0005531754 podman[164099]: 2025-11-22 05:36:32.226189109 +0000 UTC m=+0.050938184 container create 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:36:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:32 np0005531754 systemd[1]: Started libpod-conmon-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope.
Nov 22 00:36:32 np0005531754 podman[164099]: 2025-11-22 05:36:32.201200271 +0000 UTC m=+0.025949386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:36:32 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:36:32 np0005531754 podman[164099]: 2025-11-22 05:36:32.321076485 +0000 UTC m=+0.145825610 container init 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:36:32 np0005531754 podman[164099]: 2025-11-22 05:36:32.333014619 +0000 UTC m=+0.157763724 container start 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:36:32 np0005531754 podman[164099]: 2025-11-22 05:36:32.336862204 +0000 UTC m=+0.161611299 container attach 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:36:32 np0005531754 youthful_ardinghelli[164133]: 167 167
Nov 22 00:36:32 np0005531754 systemd[1]: libpod-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope: Deactivated successfully.
Nov 22 00:36:32 np0005531754 conmon[164133]: conmon 943b83c16470d7bd04c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope/container/memory.events
Nov 22 00:36:32 np0005531754 podman[164099]: 2025-11-22 05:36:32.342056974 +0000 UTC m=+0.166806069 container died 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:36:32 np0005531754 systemd[1]: var-lib-containers-storage-overlay-17f651efc525b9aace07ee2b32a35a429e0d93261306b5184fbae1d84a5e3436-merged.mount: Deactivated successfully.
Nov 22 00:36:32 np0005531754 podman[164099]: 2025-11-22 05:36:32.398306352 +0000 UTC m=+0.223055447 container remove 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:36:32 np0005531754 systemd[1]: libpod-conmon-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope: Deactivated successfully.
Nov 22 00:36:32 np0005531754 python3.9[164130]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789791.7180717-446-20564874618636/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:32 np0005531754 podman[164163]: 2025-11-22 05:36:32.611739866 +0000 UTC m=+0.068005018 container create 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:36:32 np0005531754 systemd[1]: Started libpod-conmon-65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95.scope.
Nov 22 00:36:32 np0005531754 podman[164163]: 2025-11-22 05:36:32.582525293 +0000 UTC m=+0.038790535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:36:32 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:36:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:32 np0005531754 podman[164163]: 2025-11-22 05:36:32.72938246 +0000 UTC m=+0.185647652 container init 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:36:32 np0005531754 podman[164163]: 2025-11-22 05:36:32.746185066 +0000 UTC m=+0.202450228 container start 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:36:32 np0005531754 podman[164163]: 2025-11-22 05:36:32.750309298 +0000 UTC m=+0.206574450 container attach 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:36:33 np0005531754 python3.9[164253]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:36:33 np0005531754 systemd[1]: Reloading.
Nov 22 00:36:33 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:36:33 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]: {
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:    "0": [
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:        {
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "devices": [
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "/dev/loop3"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            ],
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_name": "ceph_lv0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_size": "21470642176",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "name": "ceph_lv0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "tags": {
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cluster_name": "ceph",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.crush_device_class": "",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.encrypted": "0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osd_id": "0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.type": "block",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.vdo": "0"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            },
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "type": "block",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "vg_name": "ceph_vg0"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:        }
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:    ],
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:    "1": [
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:        {
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "devices": [
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "/dev/loop4"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            ],
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_name": "ceph_lv1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_size": "21470642176",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "name": "ceph_lv1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "tags": {
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cluster_name": "ceph",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.crush_device_class": "",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.encrypted": "0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osd_id": "1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.type": "block",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.vdo": "0"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            },
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "type": "block",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "vg_name": "ceph_vg1"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:        }
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:    ],
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:    "2": [
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:        {
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "devices": [
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "/dev/loop5"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            ],
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_name": "ceph_lv2",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_size": "21470642176",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "name": "ceph_lv2",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "tags": {
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.cluster_name": "ceph",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.crush_device_class": "",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.encrypted": "0",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osd_id": "2",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.type": "block",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:                "ceph.vdo": "0"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            },
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "type": "block",
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:            "vg_name": "ceph_vg2"
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:        }
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]:    ]
Nov 22 00:36:33 np0005531754 eager_ganguly[164220]: }
Nov 22 00:36:33 np0005531754 systemd[1]: libpod-65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95.scope: Deactivated successfully.
Nov 22 00:36:33 np0005531754 podman[164163]: 2025-11-22 05:36:33.492019405 +0000 UTC m=+0.948284527 container died 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 00:36:33 np0005531754 systemd[1]: var-lib-containers-storage-overlay-fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71-merged.mount: Deactivated successfully.
Nov 22 00:36:33 np0005531754 podman[164163]: 2025-11-22 05:36:33.554443029 +0000 UTC m=+1.010708161 container remove 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:36:33 np0005531754 systemd[1]: libpod-conmon-65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95.scope: Deactivated successfully.
Nov 22 00:36:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:34 np0005531754 python3.9[164453]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:34 np0005531754 systemd[1]: Reloading.
Nov 22 00:36:34 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:36:34 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:36:34 np0005531754 podman[164558]: 2025-11-22 05:36:34.382026518 +0000 UTC m=+0.027674942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:36:34 np0005531754 podman[164558]: 2025-11-22 05:36:34.475912096 +0000 UTC m=+0.121560440 container create e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:36:34 np0005531754 systemd[1]: Started libpod-conmon-e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce.scope.
Nov 22 00:36:34 np0005531754 systemd[1]: Starting ovn_metadata_agent container...
Nov 22 00:36:34 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:36:34 np0005531754 podman[164558]: 2025-11-22 05:36:34.690112442 +0000 UTC m=+0.335760856 container init e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:36:34 np0005531754 podman[164558]: 2025-11-22 05:36:34.701883442 +0000 UTC m=+0.347531806 container start e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:36:34 np0005531754 podman[164558]: 2025-11-22 05:36:34.706644871 +0000 UTC m=+0.352293245 container attach e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:36:34 np0005531754 xenodochial_babbage[164577]: 167 167
Nov 22 00:36:34 np0005531754 systemd[1]: libpod-e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce.scope: Deactivated successfully.
Nov 22 00:36:34 np0005531754 podman[164558]: 2025-11-22 05:36:34.713367434 +0000 UTC m=+0.359015828 container died e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:36:34 np0005531754 systemd[1]: var-lib-containers-storage-overlay-58b651d861bd338607d7b2c8c64cb741e1c878f0251a60e870c4e8bc4855a770-merged.mount: Deactivated successfully.
Nov 22 00:36:34 np0005531754 podman[164558]: 2025-11-22 05:36:34.780932958 +0000 UTC m=+0.426581342 container remove e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:36:34 np0005531754 systemd[1]: libpod-conmon-e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce.scope: Deactivated successfully.
Nov 22 00:36:34 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:36:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f9e95c6c87860fb545097cf97bf3b7c73122bf952ef49bebf99f3689a8c83d8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f9e95c6c87860fb545097cf97bf3b7c73122bf952ef49bebf99f3689a8c83d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:34 np0005531754 systemd[1]: Started /usr/bin/podman healthcheck run 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c.
Nov 22 00:36:34 np0005531754 podman[164581]: 2025-11-22 05:36:34.896869485 +0000 UTC m=+0.296324336 container init 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 00:36:34 np0005531754 ovn_metadata_agent[164613]: + sudo -E kolla_set_configs
Nov 22 00:36:34 np0005531754 podman[164581]: 2025-11-22 05:36:34.93905137 +0000 UTC m=+0.338506161 container start 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 00:36:34 np0005531754 edpm-start-podman-container[164581]: ovn_metadata_agent
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Validating config file
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Copying service configuration files
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Writing out command to execute
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: ++ cat /run_command
Nov 22 00:36:35 np0005531754 podman[164627]: 2025-11-22 05:36:35.035753906 +0000 UTC m=+0.066804085 container create af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + CMD=neutron-ovn-metadata-agent
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + ARGS=
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + sudo kolla_copy_cacerts
Nov 22 00:36:35 np0005531754 edpm-start-podman-container[164579]: Creating additional drop-in dependency for "ovn_metadata_agent" (0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c)
Nov 22 00:36:35 np0005531754 podman[164621]: 2025-11-22 05:36:35.048264615 +0000 UTC m=+0.098203037 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + [[ ! -n '' ]]
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + . kolla_extend_start
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: Running command: 'neutron-ovn-metadata-agent'
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + umask 0022
Nov 22 00:36:35 np0005531754 ovn_metadata_agent[164613]: + exec neutron-ovn-metadata-agent
Nov 22 00:36:35 np0005531754 systemd[1]: Started libpod-conmon-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope.
Nov 22 00:36:35 np0005531754 systemd[1]: Reloading.
Nov 22 00:36:35 np0005531754 podman[164627]: 2025-11-22 05:36:35.010357986 +0000 UTC m=+0.041408175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:36:35 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:36:35 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:36:35 np0005531754 systemd[1]: Started ovn_metadata_agent container.
Nov 22 00:36:35 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:36:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:36:35 np0005531754 podman[164627]: 2025-11-22 05:36:35.412803022 +0000 UTC m=+0.443853191 container init af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 00:36:35 np0005531754 podman[164627]: 2025-11-22 05:36:35.427703197 +0000 UTC m=+0.458753346 container start af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:36:35 np0005531754 podman[164627]: 2025-11-22 05:36:35.43186073 +0000 UTC m=+0.462910969 container attach af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 00:36:35 np0005531754 systemd[1]: session-47.scope: Deactivated successfully.
Nov 22 00:36:35 np0005531754 systemd[1]: session-47.scope: Consumed 1min 1.435s CPU time.
Nov 22 00:36:35 np0005531754 systemd-logind[798]: Session 47 logged out. Waiting for processes to exit.
Nov 22 00:36:35 np0005531754 systemd-logind[798]: Removed session 47.
Nov 22 00:36:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]: {
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "osd_id": 1,
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "type": "bluestore"
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:    },
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "osd_id": 2,
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "type": "bluestore"
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:    },
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "osd_id": 0,
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:        "type": "bluestore"
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]:    }
Nov 22 00:36:36 np0005531754 silly_engelbart[164687]: }
Nov 22 00:36:36 np0005531754 systemd[1]: libpod-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope: Deactivated successfully.
Nov 22 00:36:36 np0005531754 podman[164627]: 2025-11-22 05:36:36.453164848 +0000 UTC m=+1.484215027 container died af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:36:36 np0005531754 systemd[1]: libpod-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope: Consumed 1.027s CPU time.
Nov 22 00:36:36 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1-merged.mount: Deactivated successfully.
Nov 22 00:36:36 np0005531754 podman[164627]: 2025-11-22 05:36:36.536066968 +0000 UTC m=+1.567117147 container remove af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 00:36:36 np0005531754 systemd[1]: libpod-conmon-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope: Deactivated successfully.
Nov 22 00:36:36 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:36:36 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:36:36 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:36:36 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:36:36 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 3c5e8a40-8183-4283-8b9d-415ca50000a4 does not exist
Nov 22 00:36:36 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 96bae96a-793e-45c8-90cb-941347e27b22 does not exist
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.898 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.915 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.915 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.916 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.916 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.917 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 22 00:36:36 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.939 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 772af8e6-0f26-443e-a044-9109439e729d (UUID: 772af8e6-0f26-443e-a044-9109439e729d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.974 164618 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.974 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.974 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.975 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.978 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.984 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.989 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '772af8e6-0f26-443e-a044-9109439e729d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa14a62baf0>], external_ids={}, name=772af8e6-0f26-443e-a044-9109439e729d, nb_cfg_timestamp=1763789731621, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.991 164618 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa14a62fb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.991 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.992 164618 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.992 164618 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.992 164618 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 22 00:36:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.997 164618 DEBUG oslo_service.service [-] Started child 164844 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.000 164618 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp5_2ddpk5/privsep.sock']#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.001 164844 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-953242'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.031 164844 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.031 164844 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.032 164844 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.036 164844 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.047 164844 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.056 164844 INFO eventlet.wsgi.server [-] (164844) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 22 00:36:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:37 np0005531754 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 22 00:36:37 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:36:37 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.673 164618 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.674 164618 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5_2ddpk5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.538 164849 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.543 164849 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.545 164849 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.546 164849 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164849#033[00m
Nov 22 00:36:37 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.678 164849 DEBUG oslo.privsep.daemon [-] privsep: reply[4adfa083-ec3e-4a39-a723-52f67de2216f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 00:36:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.190 164849 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.190 164849 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.190 164849 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.697 164849 DEBUG oslo.privsep.daemon [-] privsep: reply[07917ce7-9923-4d40-80e8-e61a7fb0de57]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.701 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, column=external_ids, values=({'neutron:ovn-metadata-id': 'd37bcddf-b93a-5e8c-a505-2020a426b129'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.714 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.727 164618 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.727 164618 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.727 164618 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.730 164618 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.730 164618 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.730 164618 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.764 164618 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.764 164618 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.764 164618 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:36:38 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 22 00:36:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:41 np0005531754 systemd-logind[798]: New session 48 of user zuul.
Nov 22 00:36:41 np0005531754 systemd[1]: Started Session 48 of User zuul.
Nov 22 00:36:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:42 np0005531754 python3.9[165007]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:36:43
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'volumes', '.mgr', 'images', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:36:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:44 np0005531754 python3.9[165163]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:36:45 np0005531754 python3.9[165328]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:36:45 np0005531754 systemd[1]: Reloading.
Nov 22 00:36:45 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:36:45 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:36:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:46 np0005531754 python3.9[165513]: ansible-ansible.builtin.service_facts Invoked
Nov 22 00:36:46 np0005531754 network[165530]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 00:36:46 np0005531754 network[165531]: 'network-scripts' will be removed from distribution in near future.
Nov 22 00:36:46 np0005531754 network[165532]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 00:36:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:51 np0005531754 python3.9[165794]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:52 np0005531754 python3.9[165947]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:36:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:36:53 np0005531754 python3.9[166100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:54 np0005531754 python3.9[166253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:55 np0005531754 python3.9[166406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:56 np0005531754 python3.9[166559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:57 np0005531754 python3.9[166712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:36:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:36:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:36:58 np0005531754 python3.9[166865]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:58 np0005531754 podman[166989]: 2025-11-22 05:36:58.794035583 +0000 UTC m=+0.120063320 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 22 00:36:58 np0005531754 python3.9[167037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:59 np0005531754 python3.9[167195]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:36:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:00 np0005531754 python3.9[167347]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:01 np0005531754 python3.9[167499]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:02 np0005531754 python3.9[167651]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:03 np0005531754 python3.9[167803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:03 np0005531754 python3.9[167955]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:04 np0005531754 python3.9[168107]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:05 np0005531754 podman[168231]: 2025-11-22 05:37:05.261971232 +0000 UTC m=+0.112103045 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 00:37:05 np0005531754 python3.9[168279]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:06 np0005531754 python3.9[168431]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:06 np0005531754 python3.9[168583]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:07 np0005531754 python3.9[168735]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:08 np0005531754 python3.9[168887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:37:09 np0005531754 python3.9[169039]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:10 np0005531754 python3.9[169191]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 00:37:11 np0005531754 python3.9[169343]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:37:11 np0005531754 systemd[1]: Reloading.
Nov 22 00:37:11 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:37:11 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:37:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:12 np0005531754 python3.9[169530]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:12 np0005531754 python3.9[169683]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:13 np0005531754 python3.9[169836]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:37:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:37:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:37:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:37:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:37:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:37:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:14 np0005531754 python3.9[169989]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:15 np0005531754 python3.9[170142]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 22 00:37:16 np0005531754 python3.9[170295]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:16 np0005531754 python3.9[170448]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:37:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 00:37:18 np0005531754 python3.9[170601]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 22 00:37:18 np0005531754 python3.9[170754]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 00:37:19 np0005531754 python3.9[170912]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 00:37:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 00:37:20 np0005531754 python3.9[171072]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:37:21 np0005531754 python3.9[171156]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:37:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 00:37:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 00:37:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 00:37:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 00:37:29 np0005531754 podman[171167]: 2025-11-22 05:37:29.251030895 +0000 UTC m=+0.105136625 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:37:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 22 00:37:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 22 00:37:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:36 np0005531754 podman[171365]: 2025-11-22 05:37:36.222599806 +0000 UTC m=+0.072267977 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:37:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:37:36.899 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:37:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:37:36.900 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:37:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:37:36.900 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:37:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:37 np0005531754 podman[171559]: 2025-11-22 05:37:37.681645401 +0000 UTC m=+0.096651793 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:37:37 np0005531754 podman[171559]: 2025-11-22 05:37:37.769810972 +0000 UTC m=+0.184817294 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:37:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:37:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:37:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:39 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f3f03c33-a34e-4aa7-a034-2fccd98c53fb does not exist
Nov 22 00:37:39 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 1bdb6bbc-c31c-45d9-be98-c5fd335178bf does not exist
Nov 22 00:37:39 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 18b947d5-6b73-4a60-aeb1-dfc607c3a9c6 does not exist
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:37:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:37:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:40 np0005531754 podman[171989]: 2025-11-22 05:37:40.213917625 +0000 UTC m=+0.072149153 container create 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:37:40 np0005531754 systemd[1]: Started libpod-conmon-382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea.scope.
Nov 22 00:37:40 np0005531754 podman[171989]: 2025-11-22 05:37:40.186122375 +0000 UTC m=+0.044353943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:37:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:37:40 np0005531754 podman[171989]: 2025-11-22 05:37:40.309828987 +0000 UTC m=+0.168060585 container init 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:37:40 np0005531754 podman[171989]: 2025-11-22 05:37:40.321188147 +0000 UTC m=+0.179419635 container start 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:37:40 np0005531754 podman[171989]: 2025-11-22 05:37:40.32458819 +0000 UTC m=+0.182819718 container attach 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:37:40 np0005531754 interesting_lamport[172007]: 167 167
Nov 22 00:37:40 np0005531754 systemd[1]: libpod-382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea.scope: Deactivated successfully.
Nov 22 00:37:40 np0005531754 podman[171989]: 2025-11-22 05:37:40.331292523 +0000 UTC m=+0.189524061 container died 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:37:40 np0005531754 systemd[1]: var-lib-containers-storage-overlay-cb3cebe32bbe5a7eef2d8f07686f8201b0ed749b2f6dff6869a5ab474fa8aff1-merged.mount: Deactivated successfully.
Nov 22 00:37:40 np0005531754 podman[171989]: 2025-11-22 05:37:40.390877643 +0000 UTC m=+0.249109141 container remove 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 00:37:40 np0005531754 systemd[1]: libpod-conmon-382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea.scope: Deactivated successfully.
Nov 22 00:37:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:37:40 np0005531754 podman[172031]: 2025-11-22 05:37:40.649879553 +0000 UTC m=+0.069505821 container create 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:37:40 np0005531754 systemd[1]: Started libpod-conmon-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope.
Nov 22 00:37:40 np0005531754 podman[172031]: 2025-11-22 05:37:40.622972537 +0000 UTC m=+0.042598865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:37:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:37:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:40 np0005531754 podman[172031]: 2025-11-22 05:37:40.771149337 +0000 UTC m=+0.190775645 container init 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:37:40 np0005531754 podman[172031]: 2025-11-22 05:37:40.789396166 +0000 UTC m=+0.209022424 container start 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:37:40 np0005531754 podman[172031]: 2025-11-22 05:37:40.793453097 +0000 UTC m=+0.213079425 container attach 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:37:41 np0005531754 vibrant_chatelet[172049]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:37:41 np0005531754 vibrant_chatelet[172049]: --> relative data size: 1.0
Nov 22 00:37:41 np0005531754 vibrant_chatelet[172049]: --> All data devices are unavailable
Nov 22 00:37:41 np0005531754 systemd[1]: libpod-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope: Deactivated successfully.
Nov 22 00:37:41 np0005531754 systemd[1]: libpod-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope: Consumed 1.045s CPU time.
Nov 22 00:37:41 np0005531754 podman[172031]: 2025-11-22 05:37:41.896986014 +0000 UTC m=+1.316612292 container died 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:37:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197-merged.mount: Deactivated successfully.
Nov 22 00:37:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:41 np0005531754 podman[172031]: 2025-11-22 05:37:41.997643516 +0000 UTC m=+1.417269774 container remove 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:37:42 np0005531754 systemd[1]: libpod-conmon-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope: Deactivated successfully.
Nov 22 00:37:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:42 np0005531754 podman[172237]: 2025-11-22 05:37:42.911316713 +0000 UTC m=+0.061865822 container create d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 00:37:42 np0005531754 systemd[1]: Started libpod-conmon-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope.
Nov 22 00:37:42 np0005531754 podman[172237]: 2025-11-22 05:37:42.888052977 +0000 UTC m=+0.038602086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:37:42 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:37:43 np0005531754 podman[172237]: 2025-11-22 05:37:43.007096181 +0000 UTC m=+0.157645350 container init d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:37:43 np0005531754 podman[172237]: 2025-11-22 05:37:43.019294734 +0000 UTC m=+0.169843833 container start d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 00:37:43 np0005531754 podman[172237]: 2025-11-22 05:37:43.023338355 +0000 UTC m=+0.173887474 container attach d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:37:43 np0005531754 systemd[1]: libpod-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope: Deactivated successfully.
Nov 22 00:37:43 np0005531754 stoic_blackburn[172253]: 167 167
Nov 22 00:37:43 np0005531754 conmon[172253]: conmon d65a4a01d945cc4b2865 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope/container/memory.events
Nov 22 00:37:43 np0005531754 podman[172237]: 2025-11-22 05:37:43.028100955 +0000 UTC m=+0.178650054 container died d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:37:43 np0005531754 systemd[1]: var-lib-containers-storage-overlay-bba2544b685943f6eece6f45be3ef33d641e46f29fcd3e1fb61d2b7d8db1fe03-merged.mount: Deactivated successfully.
Nov 22 00:37:43 np0005531754 podman[172237]: 2025-11-22 05:37:43.086786539 +0000 UTC m=+0.237335618 container remove d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:37:43 np0005531754 systemd[1]: libpod-conmon-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope: Deactivated successfully.
Nov 22 00:37:43 np0005531754 podman[172277]: 2025-11-22 05:37:43.289199653 +0000 UTC m=+0.065539463 container create 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:37:43 np0005531754 systemd[1]: Started libpod-conmon-03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2.scope.
Nov 22 00:37:43 np0005531754 podman[172277]: 2025-11-22 05:37:43.261197257 +0000 UTC m=+0.037537167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:37:43 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:37:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:43 np0005531754 podman[172277]: 2025-11-22 05:37:43.403362913 +0000 UTC m=+0.179702763 container init 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:37:43 np0005531754 podman[172277]: 2025-11-22 05:37:43.409768899 +0000 UTC m=+0.186108739 container start 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:37:43 np0005531754 podman[172277]: 2025-11-22 05:37:43.413058508 +0000 UTC m=+0.189398348 container attach 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:37:43
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:37:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]: {
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:    "0": [
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:        {
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "devices": [
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "/dev/loop3"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            ],
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_name": "ceph_lv0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_size": "21470642176",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "name": "ceph_lv0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "tags": {
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cluster_name": "ceph",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.crush_device_class": "",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.encrypted": "0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osd_id": "0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.type": "block",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.vdo": "0"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            },
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "type": "block",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "vg_name": "ceph_vg0"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:        }
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:    ],
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:    "1": [
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:        {
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "devices": [
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "/dev/loop4"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            ],
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_name": "ceph_lv1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_size": "21470642176",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "name": "ceph_lv1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "tags": {
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cluster_name": "ceph",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.crush_device_class": "",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.encrypted": "0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osd_id": "1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.type": "block",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.vdo": "0"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            },
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "type": "block",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "vg_name": "ceph_vg1"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:        }
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:    ],
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:    "2": [
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:        {
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "devices": [
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "/dev/loop5"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            ],
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_name": "ceph_lv2",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_size": "21470642176",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "name": "ceph_lv2",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "tags": {
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.cluster_name": "ceph",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.crush_device_class": "",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.encrypted": "0",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osd_id": "2",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.type": "block",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:                "ceph.vdo": "0"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            },
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "type": "block",
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:            "vg_name": "ceph_vg2"
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:        }
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]:    ]
Nov 22 00:37:44 np0005531754 relaxed_easley[172294]: }
Nov 22 00:37:44 np0005531754 systemd[1]: libpod-03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2.scope: Deactivated successfully.
Nov 22 00:37:44 np0005531754 podman[172277]: 2025-11-22 05:37:44.230419492 +0000 UTC m=+1.006759352 container died 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:37:44 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579-merged.mount: Deactivated successfully.
Nov 22 00:37:44 np0005531754 podman[172277]: 2025-11-22 05:37:44.313997387 +0000 UTC m=+1.090337247 container remove 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:37:44 np0005531754 systemd[1]: libpod-conmon-03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2.scope: Deactivated successfully.
Nov 22 00:37:45 np0005531754 podman[172459]: 2025-11-22 05:37:45.095376408 +0000 UTC m=+0.067792025 container create 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:37:45 np0005531754 systemd[1]: Started libpod-conmon-6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab.scope.
Nov 22 00:37:45 np0005531754 podman[172459]: 2025-11-22 05:37:45.067026272 +0000 UTC m=+0.039441939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:37:45 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:37:45 np0005531754 podman[172459]: 2025-11-22 05:37:45.197264822 +0000 UTC m=+0.169680479 container init 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:37:45 np0005531754 podman[172459]: 2025-11-22 05:37:45.208302794 +0000 UTC m=+0.180718421 container start 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:37:45 np0005531754 podman[172459]: 2025-11-22 05:37:45.213569208 +0000 UTC m=+0.185984885 container attach 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:37:45 np0005531754 hardcore_brattain[172474]: 167 167
Nov 22 00:37:45 np0005531754 systemd[1]: libpod-6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab.scope: Deactivated successfully.
Nov 22 00:37:45 np0005531754 podman[172459]: 2025-11-22 05:37:45.215529571 +0000 UTC m=+0.187945198 container died 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:37:45 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f1b883df1b55ef166ab83b946c9e452777c5185b1d28e2a7079eb04a92ba4499-merged.mount: Deactivated successfully.
Nov 22 00:37:45 np0005531754 podman[172459]: 2025-11-22 05:37:45.278253076 +0000 UTC m=+0.250668703 container remove 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:37:45 np0005531754 systemd[1]: libpod-conmon-6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab.scope: Deactivated successfully.
Nov 22 00:37:45 np0005531754 podman[172497]: 2025-11-22 05:37:45.524786365 +0000 UTC m=+0.065370687 container create ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:37:45 np0005531754 systemd[1]: Started libpod-conmon-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope.
Nov 22 00:37:45 np0005531754 podman[172497]: 2025-11-22 05:37:45.495848915 +0000 UTC m=+0.036433307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:37:45 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:37:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:37:45 np0005531754 podman[172497]: 2025-11-22 05:37:45.656519886 +0000 UTC m=+0.197104228 container init ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:37:45 np0005531754 podman[172497]: 2025-11-22 05:37:45.668565366 +0000 UTC m=+0.209149688 container start ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:37:45 np0005531754 podman[172497]: 2025-11-22 05:37:45.673068149 +0000 UTC m=+0.213652531 container attach ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:37:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:46 np0005531754 bold_allen[172513]: {
Nov 22 00:37:46 np0005531754 bold_allen[172513]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "osd_id": 1,
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "type": "bluestore"
Nov 22 00:37:46 np0005531754 bold_allen[172513]:    },
Nov 22 00:37:46 np0005531754 bold_allen[172513]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "osd_id": 2,
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "type": "bluestore"
Nov 22 00:37:46 np0005531754 bold_allen[172513]:    },
Nov 22 00:37:46 np0005531754 bold_allen[172513]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "osd_id": 0,
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:37:46 np0005531754 bold_allen[172513]:        "type": "bluestore"
Nov 22 00:37:46 np0005531754 bold_allen[172513]:    }
Nov 22 00:37:46 np0005531754 bold_allen[172513]: }
Nov 22 00:37:46 np0005531754 systemd[1]: libpod-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope: Deactivated successfully.
Nov 22 00:37:46 np0005531754 podman[172497]: 2025-11-22 05:37:46.684411197 +0000 UTC m=+1.224995509 container died ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 22 00:37:46 np0005531754 systemd[1]: libpod-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope: Consumed 1.012s CPU time.
Nov 22 00:37:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de-merged.mount: Deactivated successfully.
Nov 22 00:37:46 np0005531754 podman[172497]: 2025-11-22 05:37:46.753607768 +0000 UTC m=+1.294192070 container remove ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:37:46 np0005531754 systemd[1]: libpod-conmon-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope: Deactivated successfully.
Nov 22 00:37:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:37:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:37:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:46 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 98c4f8c4-5601-4fc6-b7d6-a32a1f35d0b0 does not exist
Nov 22 00:37:46 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 7d48396e-931c-49f4-b817-6e7c6278b7ef does not exist
Nov 22 00:37:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:37:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:50 np0005531754 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 00:37:50 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 00:37:50 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 22 00:37:50 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 00:37:50 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 22 00:37:50 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 00:37:50 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 00:37:50 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 00:37:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:37:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:37:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:37:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:37:59 np0005531754 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 00:37:59 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 00:37:59 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 22 00:37:59 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 00:37:59 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 22 00:37:59 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 00:37:59 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 00:37:59 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 00:37:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:00 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 22 00:38:00 np0005531754 podman[172624]: 2025-11-22 05:38:00.282397849 +0000 UTC m=+0.129377977 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:38:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:07 np0005531754 podman[172650]: 2025-11-22 05:38:07.19400519 +0000 UTC m=+0.058183622 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:38:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:38:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:38:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:38:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:38:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:38:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:38:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.496056) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895496119, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 3509508, "memory_usage": 3568896, "flush_reason": "Manual Compaction"}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895524407, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3434386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9727, "largest_seqno": 11762, "table_properties": {"data_size": 3425132, "index_size": 5876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17719, "raw_average_key_size": 19, "raw_value_size": 3406829, "raw_average_value_size": 3735, "num_data_blocks": 267, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789662, "oldest_key_time": 1763789662, "file_creation_time": 1763789895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 28490 microseconds, and 12068 cpu microseconds.
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.524551) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3434386 bytes OK
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.524582) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.526917) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.526980) EVENT_LOG_v1 {"time_micros": 1763789895526969, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.527009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3501029, prev total WAL file size 3501029, number of live WAL files 2.
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.528717) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3353KB)], [26(5905KB)]
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895528796, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9481137, "oldest_snapshot_seqno": -1}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3702 keys, 7789203 bytes, temperature: kUnknown
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895595411, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7789203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7760907, "index_size": 17946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88958, "raw_average_key_size": 24, "raw_value_size": 7690577, "raw_average_value_size": 2077, "num_data_blocks": 776, "num_entries": 3702, "num_filter_entries": 3702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763789895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.596141) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7789203 bytes
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.598979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.3 rd, 116.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(5.0) write-amplify(2.3) OK, records in: 4216, records dropped: 514 output_compression: NoCompression
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.599022) EVENT_LOG_v1 {"time_micros": 1763789895599002, "job": 10, "event": "compaction_finished", "compaction_time_micros": 67082, "compaction_time_cpu_micros": 33075, "output_level": 6, "num_output_files": 1, "total_output_size": 7789203, "num_input_records": 4216, "num_output_records": 3702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895601334, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895603684, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.528589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:38:15 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:38:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:31 np0005531754 podman[182232]: 2025-11-22 05:38:31.245102122 +0000 UTC m=+0.097240521 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 00:38:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:38:36.900 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:38:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:38:36.901 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:38:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:38:36.901 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:38:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:38 np0005531754 podman[185583]: 2025-11-22 05:38:38.222209991 +0000 UTC m=+0.077975699 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 00:38:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:38:43
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images', '.rgw.root', 'default.rgw.control']
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:38:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:38:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 79c0eb9e-2ffe-4af8-9d3f-2088851068fc does not exist
Nov 22 00:38:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 808801b8-29a4-4b3e-84b2-e7e0fdda8e3f does not exist
Nov 22 00:38:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 50373c44-20a6-4e98-b88c-1929a24b1280 does not exist
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:38:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:38:48 np0005531754 podman[189779]: 2025-11-22 05:38:48.753169644 +0000 UTC m=+0.033040128 container create eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:38:48 np0005531754 systemd[1]: Started libpod-conmon-eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01.scope.
Nov 22 00:38:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:38:48 np0005531754 podman[189779]: 2025-11-22 05:38:48.739752787 +0000 UTC m=+0.019623271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:38:48 np0005531754 podman[189779]: 2025-11-22 05:38:48.84574924 +0000 UTC m=+0.125619734 container init eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:38:48 np0005531754 podman[189779]: 2025-11-22 05:38:48.8540364 +0000 UTC m=+0.133906924 container start eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 00:38:48 np0005531754 gallant_lamport[189795]: 167 167
Nov 22 00:38:48 np0005531754 systemd[1]: libpod-eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01.scope: Deactivated successfully.
Nov 22 00:38:48 np0005531754 podman[189779]: 2025-11-22 05:38:48.860141422 +0000 UTC m=+0.140011916 container attach eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:38:48 np0005531754 podman[189779]: 2025-11-22 05:38:48.860600823 +0000 UTC m=+0.140471317 container died eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:38:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f1b82b2023b18d981ad166553d7a1a36ed2a5344df143f190b90c47b5a8f9463-merged.mount: Deactivated successfully.
Nov 22 00:38:48 np0005531754 podman[189779]: 2025-11-22 05:38:48.925757033 +0000 UTC m=+0.205627547 container remove eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 00:38:48 np0005531754 systemd[1]: libpod-conmon-eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01.scope: Deactivated successfully.
Nov 22 00:38:49 np0005531754 podman[189819]: 2025-11-22 05:38:49.220095922 +0000 UTC m=+0.103030224 container create ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 00:38:49 np0005531754 podman[189819]: 2025-11-22 05:38:49.16159309 +0000 UTC m=+0.044527442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:38:49 np0005531754 systemd[1]: Started libpod-conmon-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope.
Nov 22 00:38:49 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:38:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:49 np0005531754 podman[189819]: 2025-11-22 05:38:49.316054758 +0000 UTC m=+0.198989030 container init ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:38:49 np0005531754 podman[189819]: 2025-11-22 05:38:49.32255681 +0000 UTC m=+0.205491092 container start ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:38:49 np0005531754 podman[189819]: 2025-11-22 05:38:49.326465234 +0000 UTC m=+0.209399516 container attach ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 00:38:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:50 np0005531754 zen_chaum[189835]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:38:50 np0005531754 zen_chaum[189835]: --> relative data size: 1.0
Nov 22 00:38:50 np0005531754 zen_chaum[189835]: --> All data devices are unavailable
Nov 22 00:38:50 np0005531754 systemd[1]: libpod-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope: Deactivated successfully.
Nov 22 00:38:50 np0005531754 systemd[1]: libpod-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope: Consumed 1.023s CPU time.
Nov 22 00:38:50 np0005531754 podman[189819]: 2025-11-22 05:38:50.402057182 +0000 UTC m=+1.284991524 container died ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:38:50 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b-merged.mount: Deactivated successfully.
Nov 22 00:38:50 np0005531754 podman[189819]: 2025-11-22 05:38:50.482124147 +0000 UTC m=+1.365058409 container remove ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:38:50 np0005531754 systemd[1]: libpod-conmon-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope: Deactivated successfully.
Nov 22 00:38:51 np0005531754 podman[190014]: 2025-11-22 05:38:51.258004983 +0000 UTC m=+0.051324913 container create 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:38:51 np0005531754 systemd[1]: Started libpod-conmon-44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695.scope.
Nov 22 00:38:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:38:51 np0005531754 podman[190014]: 2025-11-22 05:38:51.237890309 +0000 UTC m=+0.031210269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:38:51 np0005531754 podman[190014]: 2025-11-22 05:38:51.350329443 +0000 UTC m=+0.143649383 container init 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:38:51 np0005531754 podman[190014]: 2025-11-22 05:38:51.357649387 +0000 UTC m=+0.150969337 container start 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:38:51 np0005531754 gifted_darwin[190030]: 167 167
Nov 22 00:38:51 np0005531754 podman[190014]: 2025-11-22 05:38:51.362655949 +0000 UTC m=+0.155975889 container attach 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:38:51 np0005531754 systemd[1]: libpod-44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695.scope: Deactivated successfully.
Nov 22 00:38:51 np0005531754 podman[190014]: 2025-11-22 05:38:51.36758603 +0000 UTC m=+0.160905970 container died 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 00:38:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-216b2e1a86618ec1b1fe3d42b35fef515b835a768dcf055582a3fd790bdfea5d-merged.mount: Deactivated successfully.
Nov 22 00:38:51 np0005531754 podman[190014]: 2025-11-22 05:38:51.422014035 +0000 UTC m=+0.215333995 container remove 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:38:51 np0005531754 systemd[1]: libpod-conmon-44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695.scope: Deactivated successfully.
Nov 22 00:38:51 np0005531754 podman[190054]: 2025-11-22 05:38:51.679885846 +0000 UTC m=+0.064784159 container create d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:38:51 np0005531754 systemd[1]: Started libpod-conmon-d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922.scope.
Nov 22 00:38:51 np0005531754 podman[190054]: 2025-11-22 05:38:51.654844752 +0000 UTC m=+0.039743055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:38:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:38:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:51 np0005531754 podman[190054]: 2025-11-22 05:38:51.796749028 +0000 UTC m=+0.181647381 container init d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:38:51 np0005531754 podman[190054]: 2025-11-22 05:38:51.808106769 +0000 UTC m=+0.193005082 container start d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:38:51 np0005531754 podman[190054]: 2025-11-22 05:38:51.81268453 +0000 UTC m=+0.197582883 container attach d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:38:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]: {
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:    "0": [
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:        {
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "devices": [
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "/dev/loop3"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            ],
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_name": "ceph_lv0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_size": "21470642176",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "name": "ceph_lv0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "tags": {
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cluster_name": "ceph",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.crush_device_class": "",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.encrypted": "0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osd_id": "0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.type": "block",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.vdo": "0"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            },
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "type": "block",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "vg_name": "ceph_vg0"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:        }
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:    ],
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:    "1": [
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:        {
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "devices": [
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "/dev/loop4"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            ],
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_name": "ceph_lv1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_size": "21470642176",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "name": "ceph_lv1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "tags": {
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cluster_name": "ceph",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.crush_device_class": "",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.encrypted": "0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osd_id": "1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.type": "block",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.vdo": "0"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            },
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "type": "block",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "vg_name": "ceph_vg1"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:        }
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:    ],
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:    "2": [
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:        {
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "devices": [
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "/dev/loop5"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            ],
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_name": "ceph_lv2",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_size": "21470642176",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "name": "ceph_lv2",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "tags": {
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.cluster_name": "ceph",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.crush_device_class": "",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.encrypted": "0",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osd_id": "2",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.type": "block",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:                "ceph.vdo": "0"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            },
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "type": "block",
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:            "vg_name": "ceph_vg2"
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:        }
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]:    ]
Nov 22 00:38:52 np0005531754 funny_chatelet[190071]: }
Nov 22 00:38:52 np0005531754 systemd[1]: libpod-d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922.scope: Deactivated successfully.
Nov 22 00:38:52 np0005531754 podman[190054]: 2025-11-22 05:38:52.63619675 +0000 UTC m=+1.021095063 container died d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:38:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:38:52 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7-merged.mount: Deactivated successfully.
Nov 22 00:38:52 np0005531754 podman[190054]: 2025-11-22 05:38:52.877982545 +0000 UTC m=+1.262880828 container remove d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:38:52 np0005531754 systemd[1]: libpod-conmon-d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922.scope: Deactivated successfully.
Nov 22 00:38:53 np0005531754 podman[190232]: 2025-11-22 05:38:53.68663937 +0000 UTC m=+0.065036746 container create 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:38:53 np0005531754 systemd[1]: Started libpod-conmon-5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4.scope.
Nov 22 00:38:53 np0005531754 podman[190232]: 2025-11-22 05:38:53.660010484 +0000 UTC m=+0.038407920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:38:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:38:53 np0005531754 podman[190232]: 2025-11-22 05:38:53.783782588 +0000 UTC m=+0.162180024 container init 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:38:53 np0005531754 podman[190232]: 2025-11-22 05:38:53.78987408 +0000 UTC m=+0.168271466 container start 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:38:53 np0005531754 podman[190232]: 2025-11-22 05:38:53.793718502 +0000 UTC m=+0.172115888 container attach 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:38:53 np0005531754 gracious_golick[190249]: 167 167
Nov 22 00:38:53 np0005531754 systemd[1]: libpod-5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4.scope: Deactivated successfully.
Nov 22 00:38:53 np0005531754 podman[190254]: 2025-11-22 05:38:53.841231113 +0000 UTC m=+0.029402721 container died 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:38:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8fcacc5891067ec4aa0261990da4f36bb23fe9bcd8234e9e9036c93d72577640-merged.mount: Deactivated successfully.
Nov 22 00:38:53 np0005531754 podman[190254]: 2025-11-22 05:38:53.890664954 +0000 UTC m=+0.078836562 container remove 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:38:53 np0005531754 systemd[1]: libpod-conmon-5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4.scope: Deactivated successfully.
Nov 22 00:38:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:54 np0005531754 podman[190276]: 2025-11-22 05:38:54.114123713 +0000 UTC m=+0.039493679 container create b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:38:54 np0005531754 systemd[1]: Started libpod-conmon-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope.
Nov 22 00:38:54 np0005531754 podman[190276]: 2025-11-22 05:38:54.096027983 +0000 UTC m=+0.021397999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:38:54 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:38:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:38:54 np0005531754 podman[190276]: 2025-11-22 05:38:54.23159694 +0000 UTC m=+0.156966926 container init b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 00:38:54 np0005531754 podman[190276]: 2025-11-22 05:38:54.243307371 +0000 UTC m=+0.168677367 container start b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:38:54 np0005531754 podman[190276]: 2025-11-22 05:38:54.247982804 +0000 UTC m=+0.173352800 container attach b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]: {
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "osd_id": 1,
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "type": "bluestore"
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:    },
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "osd_id": 2,
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "type": "bluestore"
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:    },
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "osd_id": 0,
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:        "type": "bluestore"
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]:    }
Nov 22 00:38:55 np0005531754 objective_mahavira[190293]: }
Nov 22 00:38:55 np0005531754 systemd[1]: libpod-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope: Deactivated successfully.
Nov 22 00:38:55 np0005531754 podman[190276]: 2025-11-22 05:38:55.350411035 +0000 UTC m=+1.275781051 container died b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:38:55 np0005531754 systemd[1]: libpod-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope: Consumed 1.112s CPU time.
Nov 22 00:38:55 np0005531754 systemd[1]: var-lib-containers-storage-overlay-27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339-merged.mount: Deactivated successfully.
Nov 22 00:38:55 np0005531754 podman[190276]: 2025-11-22 05:38:55.438351178 +0000 UTC m=+1.363721184 container remove b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:38:55 np0005531754 systemd[1]: libpod-conmon-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope: Deactivated successfully.
Nov 22 00:38:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:38:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:38:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:38:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:38:55 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c5e97d09-6ec3-4210-b55e-7dc06dde093f does not exist
Nov 22 00:38:55 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 6dc0bf76-fb4e-4516-942d-d5a5d68f1bde does not exist
Nov 22 00:38:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:38:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:38:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:38:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:38:58 np0005531754 kernel: SELinux:  Converting 2770 SID table entries...
Nov 22 00:38:58 np0005531754 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 00:38:58 np0005531754 kernel: SELinux:  policy capability open_perms=1
Nov 22 00:38:58 np0005531754 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 00:38:58 np0005531754 kernel: SELinux:  policy capability always_check_network=0
Nov 22 00:38:58 np0005531754 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 00:38:58 np0005531754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 00:38:58 np0005531754 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 00:38:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:00 np0005531754 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 00:39:00 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 22 00:39:00 np0005531754 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 00:39:01 np0005531754 podman[190430]: 2025-11-22 05:39:01.396941784 +0000 UTC m=+0.110263877 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 22 00:39:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:09 np0005531754 podman[191204]: 2025-11-22 05:39:09.221252061 +0000 UTC m=+0.079025167 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 00:39:09 np0005531754 systemd[1]: Stopping OpenSSH server daemon...
Nov 22 00:39:09 np0005531754 systemd[1]: sshd.service: Deactivated successfully.
Nov 22 00:39:09 np0005531754 systemd[1]: Stopped OpenSSH server daemon.
Nov 22 00:39:09 np0005531754 systemd[1]: sshd.service: Consumed 19.560s CPU time, read 32.0K from disk, written 124.0K to disk.
Nov 22 00:39:09 np0005531754 systemd[1]: Stopped target sshd-keygen.target.
Nov 22 00:39:09 np0005531754 systemd[1]: Stopping sshd-keygen.target...
Nov 22 00:39:09 np0005531754 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 00:39:09 np0005531754 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 00:39:09 np0005531754 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 00:39:09 np0005531754 systemd[1]: Reached target sshd-keygen.target.
Nov 22 00:39:09 np0005531754 systemd[1]: Starting OpenSSH server daemon...
Nov 22 00:39:09 np0005531754 systemd[1]: Started OpenSSH server daemon.
Nov 22 00:39:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:12 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:39:12 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:39:12 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:12 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:12 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:12 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 00:39:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:39:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:39:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:39:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:39:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:39:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:39:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:17 np0005531754 python3.9[195715]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:39:17 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:17 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:17 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:18 np0005531754 python3.9[196800]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:39:18 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:18 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:18 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:19 np0005531754 python3.9[198039]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:39:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:20 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:20 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:20 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:21 np0005531754 python3.9[199233]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:39:21 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:21 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:21 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:22 np0005531754 python3.9[200413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:22 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:22 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:22 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:22 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:39:22 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:39:22 np0005531754 systemd[1]: man-db-cache-update.service: Consumed 13.204s CPU time.
Nov 22 00:39:22 np0005531754 systemd[1]: run-rda1824e87d764b0fac846f34253abce5.service: Deactivated successfully.
Nov 22 00:39:23 np0005531754 python3.9[201042]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:23 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:24 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:24 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:25 np0005531754 python3.9[201232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:25 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:25 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:25 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:26 np0005531754 python3.9[201422]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:27 np0005531754 python3.9[201577]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:27 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:27 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:27 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:28 np0005531754 python3.9[201767]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 00:39:28 np0005531754 systemd[1]: Reloading.
Nov 22 00:39:28 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:39:28 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:39:29 np0005531754 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 22 00:39:29 np0005531754 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 22 00:39:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:30 np0005531754 python3.9[201960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:31 np0005531754 python3.9[202115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:31 np0005531754 podman[202242]: 2025-11-22 05:39:31.790168796 +0000 UTC m=+0.118253818 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 00:39:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:32 np0005531754 python3.9[202289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:34 np0005531754 python3.9[202449]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:35 np0005531754 python3.9[202604]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:35 np0005531754 python3.9[202759]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:36 np0005531754 python3.9[202914]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:39:36.901 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:39:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:39:36.902 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:39:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:39:36.902 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:39:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:37 np0005531754 python3.9[203069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:38 np0005531754 python3.9[203224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:39 np0005531754 python3.9[203379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:39 np0005531754 podman[203381]: 2025-11-22 05:39:39.553633853 +0000 UTC m=+0.083888305 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 00:39:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:41 np0005531754 python3.9[203553]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:42 np0005531754 python3.9[203708]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:39:43
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'backups', 'vms', '.rgw.root', 'images']
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:39:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:39:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:44 np0005531754 python3.9[203863]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:45 np0005531754 python3.9[204018]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 00:39:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:46 np0005531754 python3.9[204173]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:39:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:47 np0005531754 python3.9[204325]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:39:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:48 np0005531754 python3.9[204477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:39:49 np0005531754 python3.9[204629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:39:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:50 np0005531754 python3.9[204781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:52 np0005531754 python3.9[204933]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:39:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:39:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:39:53 np0005531754 python3.9[205085]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:39:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:54 np0005531754 python3.9[205210]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789992.4186914-554-69885207164514/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:39:54 np0005531754 python3.9[205362]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:39:55 np0005531754 python3.9[205487]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789994.3001108-554-182333866704513/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:39:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:56 np0005531754 python3.9[205754]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:39:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 87800006-9d50-4a91-a01d-143edbad4855 does not exist
Nov 22 00:39:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 4d3a473f-79ee-45f8-8767-8e938461cfa4 does not exist
Nov 22 00:39:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e115d2b2-7d23-415f-aea5-49bdc1b47c6a does not exist
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:39:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:39:57 np0005531754 python3.9[205969]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789995.9056485-554-209219980484898/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:39:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:39:57 np0005531754 podman[206102]: 2025-11-22 05:39:57.558628703 +0000 UTC m=+0.076003553 container create 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:39:57 np0005531754 systemd[1]: Started libpod-conmon-693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4.scope.
Nov 22 00:39:57 np0005531754 podman[206102]: 2025-11-22 05:39:57.524746003 +0000 UTC m=+0.042120903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:39:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:39:57 np0005531754 podman[206102]: 2025-11-22 05:39:57.684106614 +0000 UTC m=+0.201481524 container init 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:39:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:39:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:39:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:39:57 np0005531754 podman[206102]: 2025-11-22 05:39:57.693023913 +0000 UTC m=+0.210398763 container start 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:39:57 np0005531754 podman[206102]: 2025-11-22 05:39:57.698185152 +0000 UTC m=+0.215559992 container attach 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:39:57 np0005531754 sharp_tesla[206152]: 167 167
Nov 22 00:39:57 np0005531754 systemd[1]: libpod-693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4.scope: Deactivated successfully.
Nov 22 00:39:57 np0005531754 podman[206102]: 2025-11-22 05:39:57.70178989 +0000 UTC m=+0.219164710 container died 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 00:39:57 np0005531754 systemd[1]: var-lib-containers-storage-overlay-65130f7d2423741b9c440d12774ba1310ee9fcdd93ec899fc6eb991d72aaeca0-merged.mount: Deactivated successfully.
Nov 22 00:39:57 np0005531754 podman[206102]: 2025-11-22 05:39:57.756121949 +0000 UTC m=+0.273496769 container remove 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:39:57 np0005531754 systemd[1]: libpod-conmon-693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4.scope: Deactivated successfully.
Nov 22 00:39:57 np0005531754 podman[206228]: 2025-11-22 05:39:57.955022233 +0000 UTC m=+0.052470881 container create 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:39:58 np0005531754 systemd[1]: Started libpod-conmon-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope.
Nov 22 00:39:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:39:58 np0005531754 podman[206228]: 2025-11-22 05:39:57.937176563 +0000 UTC m=+0.034625231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:39:58 np0005531754 python3.9[206222]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:39:58 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:39:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:39:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:39:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:39:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:39:58 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:39:58 np0005531754 podman[206228]: 2025-11-22 05:39:58.063351443 +0000 UTC m=+0.160800101 container init 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:39:58 np0005531754 podman[206228]: 2025-11-22 05:39:58.077630757 +0000 UTC m=+0.175079415 container start 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:39:58 np0005531754 podman[206228]: 2025-11-22 05:39:58.085706874 +0000 UTC m=+0.183155542 container attach 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:39:58 np0005531754 python3.9[206373]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789997.4145343-554-275584828976283/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:39:59 np0005531754 strange_carson[206244]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:39:59 np0005531754 strange_carson[206244]: --> relative data size: 1.0
Nov 22 00:39:59 np0005531754 strange_carson[206244]: --> All data devices are unavailable
Nov 22 00:39:59 np0005531754 systemd[1]: libpod-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope: Deactivated successfully.
Nov 22 00:39:59 np0005531754 podman[206228]: 2025-11-22 05:39:59.265981874 +0000 UTC m=+1.363430542 container died 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:39:59 np0005531754 systemd[1]: libpod-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope: Consumed 1.102s CPU time.
Nov 22 00:39:59 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e-merged.mount: Deactivated successfully.
Nov 22 00:39:59 np0005531754 podman[206228]: 2025-11-22 05:39:59.347219386 +0000 UTC m=+1.444668024 container remove 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:39:59 np0005531754 systemd[1]: libpod-conmon-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope: Deactivated successfully.
Nov 22 00:39:59 np0005531754 python3.9[206584]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:00 np0005531754 podman[206782]: 2025-11-22 05:40:00.121226061 +0000 UTC m=+0.070279009 container create 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:40:00 np0005531754 systemd[1]: Started libpod-conmon-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope.
Nov 22 00:40:00 np0005531754 podman[206782]: 2025-11-22 05:40:00.094330878 +0000 UTC m=+0.043383877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:40:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:40:00 np0005531754 podman[206782]: 2025-11-22 05:40:00.214675821 +0000 UTC m=+0.163728839 container init 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 00:40:00 np0005531754 podman[206782]: 2025-11-22 05:40:00.22541792 +0000 UTC m=+0.174470848 container start 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:40:00 np0005531754 podman[206782]: 2025-11-22 05:40:00.229260563 +0000 UTC m=+0.178313531 container attach 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:40:00 np0005531754 systemd[1]: libpod-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope: Deactivated successfully.
Nov 22 00:40:00 np0005531754 awesome_ellis[206842]: 167 167
Nov 22 00:40:00 np0005531754 conmon[206842]: conmon 4828c08248a96a2ca63e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope/container/memory.events
Nov 22 00:40:00 np0005531754 podman[206782]: 2025-11-22 05:40:00.232954923 +0000 UTC m=+0.182007881 container died 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:40:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4cd3111d375082a87dba417ba142cce0ad385fda20380928c6c27e7ceb6a7a2b-merged.mount: Deactivated successfully.
Nov 22 00:40:00 np0005531754 podman[206782]: 2025-11-22 05:40:00.28721565 +0000 UTC m=+0.236268578 container remove 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:40:00 np0005531754 systemd[1]: libpod-conmon-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope: Deactivated successfully.
Nov 22 00:40:00 np0005531754 python3.9[206846]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789999.0140114-554-10862158167396/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:00 np0005531754 podman[206867]: 2025-11-22 05:40:00.504044306 +0000 UTC m=+0.049443179 container create c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:40:00 np0005531754 systemd[1]: Started libpod-conmon-c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e.scope.
Nov 22 00:40:00 np0005531754 podman[206867]: 2025-11-22 05:40:00.477669847 +0000 UTC m=+0.023068800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:40:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:40:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:00 np0005531754 podman[206867]: 2025-11-22 05:40:00.626587069 +0000 UTC m=+0.171985972 container init c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:40:00 np0005531754 podman[206867]: 2025-11-22 05:40:00.640068 +0000 UTC m=+0.185466893 container start c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:40:00 np0005531754 podman[206867]: 2025-11-22 05:40:00.645499606 +0000 UTC m=+0.190898519 container attach c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:40:01 np0005531754 python3.9[207039]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]: {
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:    "0": [
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:        {
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "devices": [
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "/dev/loop3"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            ],
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_name": "ceph_lv0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_size": "21470642176",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "name": "ceph_lv0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "tags": {
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cluster_name": "ceph",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.crush_device_class": "",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.encrypted": "0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osd_id": "0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.type": "block",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.vdo": "0"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            },
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "type": "block",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "vg_name": "ceph_vg0"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:        }
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:    ],
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:    "1": [
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:        {
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "devices": [
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "/dev/loop4"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            ],
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_name": "ceph_lv1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_size": "21470642176",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "name": "ceph_lv1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "tags": {
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cluster_name": "ceph",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.crush_device_class": "",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.encrypted": "0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osd_id": "1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.type": "block",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.vdo": "0"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            },
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "type": "block",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "vg_name": "ceph_vg1"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:        }
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:    ],
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:    "2": [
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:        {
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "devices": [
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "/dev/loop5"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            ],
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_name": "ceph_lv2",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_size": "21470642176",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "name": "ceph_lv2",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "tags": {
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.cluster_name": "ceph",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.crush_device_class": "",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.encrypted": "0",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osd_id": "2",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.type": "block",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:                "ceph.vdo": "0"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            },
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "type": "block",
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:            "vg_name": "ceph_vg2"
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:        }
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]:    ]
Nov 22 00:40:01 np0005531754 adoring_sanderson[206907]: }
Nov 22 00:40:01 np0005531754 systemd[1]: libpod-c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e.scope: Deactivated successfully.
Nov 22 00:40:01 np0005531754 podman[206867]: 2025-11-22 05:40:01.427465365 +0000 UTC m=+0.972864268 container died c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:40:01 np0005531754 systemd[1]: var-lib-containers-storage-overlay-64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98-merged.mount: Deactivated successfully.
Nov 22 00:40:01 np0005531754 podman[206867]: 2025-11-22 05:40:01.502566872 +0000 UTC m=+1.047965775 container remove c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:40:01 np0005531754 systemd[1]: libpod-conmon-c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e.scope: Deactivated successfully.
Nov 22 00:40:02 np0005531754 podman[207282]: 2025-11-22 05:40:02.000779318 +0000 UTC m=+0.115897385 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 00:40:02 np0005531754 python3.9[207259]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763790000.6305976-554-248235703689124/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:02 np0005531754 podman[207393]: 2025-11-22 05:40:02.229345868 +0000 UTC m=+0.044541287 container create 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:40:02 np0005531754 systemd[1]: Started libpod-conmon-6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5.scope.
Nov 22 00:40:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:40:02 np0005531754 podman[207393]: 2025-11-22 05:40:02.207137471 +0000 UTC m=+0.022332950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:40:02 np0005531754 podman[207393]: 2025-11-22 05:40:02.30828371 +0000 UTC m=+0.123479229 container init 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:40:02 np0005531754 podman[207393]: 2025-11-22 05:40:02.320930779 +0000 UTC m=+0.136126228 container start 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:40:02 np0005531754 podman[207393]: 2025-11-22 05:40:02.324821044 +0000 UTC m=+0.140016513 container attach 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:40:02 np0005531754 interesting_hertz[207438]: 167 167
Nov 22 00:40:02 np0005531754 systemd[1]: libpod-6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5.scope: Deactivated successfully.
Nov 22 00:40:02 np0005531754 podman[207393]: 2025-11-22 05:40:02.330200748 +0000 UTC m=+0.145396207 container died 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 00:40:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-64144b63db691be2d585bb1921ee1fec574680a4ce6a86448f2c385db5273688-merged.mount: Deactivated successfully.
Nov 22 00:40:02 np0005531754 podman[207393]: 2025-11-22 05:40:02.380187251 +0000 UTC m=+0.195382710 container remove 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:40:02 np0005531754 systemd[1]: libpod-conmon-6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5.scope: Deactivated successfully.
Nov 22 00:40:02 np0005531754 podman[207536]: 2025-11-22 05:40:02.628731479 +0000 UTC m=+0.065291325 container create c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:40:02 np0005531754 systemd[1]: Started libpod-conmon-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope.
Nov 22 00:40:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:40:02 np0005531754 podman[207536]: 2025-11-22 05:40:02.603229293 +0000 UTC m=+0.039789209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:40:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:40:02 np0005531754 python3.9[207531]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:02 np0005531754 podman[207536]: 2025-11-22 05:40:02.719817426 +0000 UTC m=+0.156377282 container init c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:40:02 np0005531754 podman[207536]: 2025-11-22 05:40:02.734936271 +0000 UTC m=+0.171496147 container start c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:40:02 np0005531754 podman[207536]: 2025-11-22 05:40:02.739761732 +0000 UTC m=+0.176328438 container attach c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:40:03 np0005531754 python3.9[207679]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763790002.1776078-554-159655940205728/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]: {
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "osd_id": 1,
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "type": "bluestore"
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:    },
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "osd_id": 2,
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "type": "bluestore"
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:    },
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "osd_id": 0,
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:        "type": "bluestore"
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]:    }
Nov 22 00:40:03 np0005531754 pensive_joliot[207552]: }
Nov 22 00:40:03 np0005531754 systemd[1]: libpod-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope: Deactivated successfully.
Nov 22 00:40:03 np0005531754 systemd[1]: libpod-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope: Consumed 1.080s CPU time.
Nov 22 00:40:03 np0005531754 podman[207536]: 2025-11-22 05:40:03.806182533 +0000 UTC m=+1.242742399 container died c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:40:03 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9-merged.mount: Deactivated successfully.
Nov 22 00:40:03 np0005531754 podman[207536]: 2025-11-22 05:40:03.879284407 +0000 UTC m=+1.315844253 container remove c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 00:40:03 np0005531754 systemd[1]: libpod-conmon-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope: Deactivated successfully.
Nov 22 00:40:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:40:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:40:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:40:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:40:03 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d9a61498-a88f-4f7e-bfb8-6b72853ce2b4 does not exist
Nov 22 00:40:03 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ca27401a-48bd-4194-96c5-48a7c9f7cac1 does not exist
Nov 22 00:40:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:04 np0005531754 python3.9[207870]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:04 np0005531754 python3.9[208045]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763790003.5771909-554-115549058670793/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:40:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:40:05 np0005531754 python3.9[208197]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 22 00:40:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:06 np0005531754 python3.9[208350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:07 np0005531754 python3.9[208502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:08 np0005531754 python3.9[208654]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:08 np0005531754 python3.9[208806]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:09 np0005531754 python3.9[208958]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:10 np0005531754 podman[209082]: 2025-11-22 05:40:10.145266613 +0000 UTC m=+0.053035086 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 00:40:10 np0005531754 python3.9[209130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:11 np0005531754 python3.9[209282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:11 np0005531754 python3.9[209434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:12 np0005531754 python3.9[209586]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:13 np0005531754 python3.9[209738]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:40:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:40:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:40:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:40:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:40:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:40:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:14 np0005531754 python3.9[209890]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:15 np0005531754 python3.9[210042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:15 np0005531754 python3.9[210194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:16 np0005531754 python3.9[210346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:17 np0005531754 python3.9[210498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:17 np0005531754 python3.9[210621]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790016.7447383-775-222023302326255/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:18 np0005531754 python3.9[210773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:19 np0005531754 python3.9[210896]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790018.1939106-775-8314261775271/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:20 np0005531754 python3.9[211048]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:20 np0005531754 python3.9[211171]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790019.6675162-775-28106747771147/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:21 np0005531754 python3.9[211323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:22 np0005531754 python3.9[211446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790021.0868194-775-231388770830619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:23 np0005531754 python3.9[211598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:23 np0005531754 python3.9[211721]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790022.6134028-775-228461831331911/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:24 np0005531754 python3.9[211873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:25 np0005531754 python3.9[211996]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790024.1103923-775-85823424669249/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:26 np0005531754 python3.9[212148]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:26 np0005531754 python3.9[212271]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790025.633697-775-91129063595278/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:27 np0005531754 python3.9[212423]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:28 np0005531754 python3.9[212546]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790027.0972097-775-49060545345133/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:29 np0005531754 python3.9[212698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:29 np0005531754 python3.9[212821]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790028.5408368-775-216727225126009/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:30 np0005531754 python3.9[212973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:31 np0005531754 python3.9[213096]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790029.9407368-775-26006056738333/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:31 np0005531754 python3.9[213248]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:32 np0005531754 podman[213317]: 2025-11-22 05:40:32.28168766 +0000 UTC m=+0.123158950 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:40:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:32 np0005531754 python3.9[213397]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790031.3101478-775-8046351819222/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:33 np0005531754 python3.9[213549]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:34 np0005531754 python3.9[213672]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790032.735274-775-174350781137421/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:34 np0005531754 python3.9[213824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:35 np0005531754 python3.9[213947]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790034.2511065-775-24247054273734/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:36 np0005531754 python3.9[214099]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:40:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:40:36.903 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:40:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:40:36.903 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:40:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:40:36.904 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:40:37 np0005531754 python3.9[214222]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790035.7544074-775-170775090634160/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:38 np0005531754 python3.9[214372]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:40:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:39 np0005531754 python3.9[214527]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 22 00:40:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:40 np0005531754 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 22 00:40:41 np0005531754 podman[214655]: 2025-11-22 05:40:41.031872926 +0000 UTC m=+0.080768961 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:40:41 np0005531754 python3.9[214703]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:41 np0005531754 python3.9[214855]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:42 np0005531754 python3.9[215007]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:43 np0005531754 python3.9[215159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:40:43
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes']
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:40:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:40:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:44 np0005531754 python3.9[215311]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:45 np0005531754 python3.9[215463]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:46 np0005531754 python3.9[215615]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:46 np0005531754 python3.9[215767]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:47 np0005531754 python3.9[215919]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:48 np0005531754 python3.9[216071]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:49 np0005531754 python3.9[216223]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:40:49 np0005531754 systemd[1]: Reloading.
Nov 22 00:40:49 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:40:49 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:40:49 np0005531754 systemd[1]: Starting libvirt logging daemon socket...
Nov 22 00:40:49 np0005531754 systemd[1]: Listening on libvirt logging daemon socket.
Nov 22 00:40:49 np0005531754 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 22 00:40:49 np0005531754 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 22 00:40:49 np0005531754 systemd[1]: Starting libvirt logging daemon...
Nov 22 00:40:49 np0005531754 systemd[1]: Started libvirt logging daemon.
Nov 22 00:40:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:50 np0005531754 python3.9[216416]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:40:50 np0005531754 systemd[1]: Reloading.
Nov 22 00:40:51 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:40:51 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:40:51 np0005531754 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 22 00:40:51 np0005531754 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 22 00:40:51 np0005531754 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 22 00:40:51 np0005531754 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 22 00:40:51 np0005531754 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 22 00:40:51 np0005531754 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 22 00:40:51 np0005531754 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 22 00:40:51 np0005531754 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 00:40:51 np0005531754 systemd[1]: Started libvirt nodedev daemon.
Nov 22 00:40:51 np0005531754 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 22 00:40:51 np0005531754 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 22 00:40:51 np0005531754 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:52 np0005531754 python3.9[216640]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:40:52 np0005531754 systemd[1]: Reloading.
Nov 22 00:40:52 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:40:52 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:40:52 np0005531754 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 22 00:40:52 np0005531754 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 22 00:40:52 np0005531754 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 22 00:40:52 np0005531754 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 22 00:40:52 np0005531754 systemd[1]: Starting libvirt proxy daemon...
Nov 22 00:40:52 np0005531754 systemd[1]: Started libvirt proxy daemon.
Nov 22 00:40:52 np0005531754 setroubleshoot[216453]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4af265fd-1c59-42ae-8de8-c99c06f445ef
Nov 22 00:40:52 np0005531754 setroubleshoot[216453]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                    
                                                    *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                    
                                                    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                    Then turn on full auditing to get path information about the offending file and generate the error again.
                                                    Do
                                                    
                                                    Turn on full auditing
                                                    # auditctl -w /etc/shadow -p w
                                                    Try to recreate AVC. Then execute
                                                    # ausearch -m avc -ts recent
                                                    If you see PATH record check ownership/permissions on file, and fix it,
                                                    otherwise report as a bugzilla.
                                                    
                                                    *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                    
                                                    If you believe that virtlogd should have the dac_read_search capability by default.
                                                    Then you should report this as a bug.
                                                    You can generate a local policy module to allow this access.
                                                    Do
                                                    allow this access for now by executing:
                                                    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                    # semodule -X 300 -i my-virtlogd.pp
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:40:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:40:53 np0005531754 python3.9[216853]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:40:53 np0005531754 systemd[1]: Reloading.
Nov 22 00:40:53 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:40:53 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:40:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:54 np0005531754 systemd[1]: Listening on libvirt locking daemon socket.
Nov 22 00:40:54 np0005531754 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 22 00:40:54 np0005531754 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 22 00:40:54 np0005531754 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 22 00:40:54 np0005531754 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 22 00:40:54 np0005531754 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 22 00:40:54 np0005531754 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 22 00:40:54 np0005531754 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 22 00:40:54 np0005531754 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 22 00:40:54 np0005531754 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 22 00:40:54 np0005531754 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 00:40:54 np0005531754 systemd[1]: Started libvirt QEMU daemon.
Nov 22 00:40:55 np0005531754 python3.9[217068]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:40:55 np0005531754 systemd[1]: Reloading.
Nov 22 00:40:55 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:40:55 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:40:55 np0005531754 systemd[1]: Starting libvirt secret daemon socket...
Nov 22 00:40:55 np0005531754 systemd[1]: Listening on libvirt secret daemon socket.
Nov 22 00:40:55 np0005531754 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 22 00:40:55 np0005531754 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 22 00:40:55 np0005531754 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 22 00:40:55 np0005531754 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 22 00:40:55 np0005531754 systemd[1]: Starting libvirt secret daemon...
Nov 22 00:40:55 np0005531754 systemd[1]: Started libvirt secret daemon.
Nov 22 00:40:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:56 np0005531754 python3.9[217281]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:40:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:40:57 np0005531754 python3.9[217433]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 00:40:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:40:58 np0005531754 python3.9[217585]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:40:59 np0005531754 python3.9[217739]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 00:41:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:00 np0005531754 python3.9[217889]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:00 np0005531754 python3.9[218010]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790059.6562026-1133-189131636320962/.source.xml follow=False _original_basename=secret.xml.j2 checksum=5662cc1bfbb8c37741b42345b876b94b094e15c0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:01 np0005531754 python3.9[218162]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 13fdadc6-d566-5465-9ac8-a148ef130da1#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:41:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:02 np0005531754 podman[218298]: 2025-11-22 05:41:02.472423724 +0000 UTC m=+0.116939233 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:41:02 np0005531754 python3.9[218342]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:02 np0005531754 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 22 00:41:02 np0005531754 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 22 00:41:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:41:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 4cc3c8d1-af6e-4da9-8c97-8be3d5384614 does not exist
Nov 22 00:41:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 62e7be59-e2f7-4506-accd-332bbfc7673c does not exist
Nov 22 00:41:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 154b944e-b53f-484b-b313-3793d743b992 does not exist
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:41:05 np0005531754 python3.9[218945]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:41:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:41:05 np0005531754 podman[219203]: 2025-11-22 05:41:05.80373185 +0000 UTC m=+0.046383216 container create f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:41:05 np0005531754 systemd[1]: Started libpod-conmon-f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382.scope.
Nov 22 00:41:05 np0005531754 podman[219203]: 2025-11-22 05:41:05.781303123 +0000 UTC m=+0.023954489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:41:05 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:41:05 np0005531754 podman[219203]: 2025-11-22 05:41:05.901036829 +0000 UTC m=+0.143688225 container init f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 00:41:05 np0005531754 podman[219203]: 2025-11-22 05:41:05.909975947 +0000 UTC m=+0.152627293 container start f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:41:05 np0005531754 podman[219203]: 2025-11-22 05:41:05.913600363 +0000 UTC m=+0.156251789 container attach f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:41:05 np0005531754 confident_bardeen[219254]: 167 167
Nov 22 00:41:05 np0005531754 systemd[1]: libpod-f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382.scope: Deactivated successfully.
Nov 22 00:41:05 np0005531754 podman[219203]: 2025-11-22 05:41:05.918772981 +0000 UTC m=+0.161424337 container died f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:41:05 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8f12749f5ec03b8961b78fa3d59516650021300ba4fac59d74cfdf674cdba5fe-merged.mount: Deactivated successfully.
Nov 22 00:41:05 np0005531754 podman[219203]: 2025-11-22 05:41:05.965629909 +0000 UTC m=+0.208281265 container remove f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:41:05 np0005531754 systemd[1]: libpod-conmon-f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382.scope: Deactivated successfully.
Nov 22 00:41:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:06 np0005531754 python3.9[219256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:06 np0005531754 podman[219278]: 2025-11-22 05:41:06.177178209 +0000 UTC m=+0.059964227 container create 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:41:06 np0005531754 systemd[1]: Started libpod-conmon-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope.
Nov 22 00:41:06 np0005531754 podman[219278]: 2025-11-22 05:41:06.147913 +0000 UTC m=+0.030699058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:41:06 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:41:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:06 np0005531754 podman[219278]: 2025-11-22 05:41:06.279862131 +0000 UTC m=+0.162648159 container init 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:41:06 np0005531754 podman[219278]: 2025-11-22 05:41:06.293804812 +0000 UTC m=+0.176590800 container start 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:41:06 np0005531754 podman[219278]: 2025-11-22 05:41:06.297940502 +0000 UTC m=+0.180726490 container attach 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:41:06 np0005531754 python3.9[219421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790065.459752-1188-181472067933015/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:07 np0005531754 suspicious_pasteur[219320]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:41:07 np0005531754 suspicious_pasteur[219320]: --> relative data size: 1.0
Nov 22 00:41:07 np0005531754 suspicious_pasteur[219320]: --> All data devices are unavailable
Nov 22 00:41:07 np0005531754 systemd[1]: libpod-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope: Deactivated successfully.
Nov 22 00:41:07 np0005531754 systemd[1]: libpod-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope: Consumed 1.075s CPU time.
Nov 22 00:41:07 np0005531754 podman[219278]: 2025-11-22 05:41:07.434994752 +0000 UTC m=+1.317780760 container died 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:41:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c-merged.mount: Deactivated successfully.
Nov 22 00:41:07 np0005531754 podman[219278]: 2025-11-22 05:41:07.516134452 +0000 UTC m=+1.398920470 container remove 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:41:07 np0005531754 systemd[1]: libpod-conmon-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope: Deactivated successfully.
Nov 22 00:41:07 np0005531754 python3.9[219608]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:08 np0005531754 podman[219869]: 2025-11-22 05:41:08.277369781 +0000 UTC m=+0.065302439 container create 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:41:08 np0005531754 systemd[1]: Started libpod-conmon-0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115.scope.
Nov 22 00:41:08 np0005531754 podman[219869]: 2025-11-22 05:41:08.250456814 +0000 UTC m=+0.038389542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:41:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:41:08 np0005531754 podman[219869]: 2025-11-22 05:41:08.372583245 +0000 UTC m=+0.160515903 container init 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 00:41:08 np0005531754 podman[219869]: 2025-11-22 05:41:08.3840777 +0000 UTC m=+0.172010348 container start 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 00:41:08 np0005531754 podman[219869]: 2025-11-22 05:41:08.388012485 +0000 UTC m=+0.175945173 container attach 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:41:08 np0005531754 quizzical_hoover[219919]: 167 167
Nov 22 00:41:08 np0005531754 systemd[1]: libpod-0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115.scope: Deactivated successfully.
Nov 22 00:41:08 np0005531754 podman[219869]: 2025-11-22 05:41:08.393352657 +0000 UTC m=+0.181285335 container died 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:41:08 np0005531754 systemd[1]: var-lib-containers-storage-overlay-09289beda1d720a2a225c01679d294b255b9d0a23ed9a68509ffa8125cfeb389-merged.mount: Deactivated successfully.
Nov 22 00:41:08 np0005531754 podman[219869]: 2025-11-22 05:41:08.448942576 +0000 UTC m=+0.236875264 container remove 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:41:08 np0005531754 systemd[1]: libpod-conmon-0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115.scope: Deactivated successfully.
Nov 22 00:41:08 np0005531754 python3.9[219921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:08 np0005531754 podman[219945]: 2025-11-22 05:41:08.708269308 +0000 UTC m=+0.070706513 container create 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:41:08 np0005531754 systemd[1]: Started libpod-conmon-1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80.scope.
Nov 22 00:41:08 np0005531754 podman[219945]: 2025-11-22 05:41:08.681932467 +0000 UTC m=+0.044369672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:41:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:41:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:08 np0005531754 podman[219945]: 2025-11-22 05:41:08.841582526 +0000 UTC m=+0.204019721 container init 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 00:41:08 np0005531754 podman[219945]: 2025-11-22 05:41:08.849837496 +0000 UTC m=+0.212274691 container start 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 00:41:08 np0005531754 podman[219945]: 2025-11-22 05:41:08.854176051 +0000 UTC m=+0.216613306 container attach 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 22 00:41:09 np0005531754 python3.9[220041]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]: {
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:    "0": [
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:        {
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "devices": [
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "/dev/loop3"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            ],
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_name": "ceph_lv0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_size": "21470642176",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "name": "ceph_lv0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "tags": {
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cluster_name": "ceph",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.crush_device_class": "",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.encrypted": "0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osd_id": "0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.type": "block",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.vdo": "0"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            },
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "type": "block",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "vg_name": "ceph_vg0"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:        }
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:    ],
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:    "1": [
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:        {
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "devices": [
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "/dev/loop4"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            ],
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_name": "ceph_lv1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_size": "21470642176",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "name": "ceph_lv1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "tags": {
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cluster_name": "ceph",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.crush_device_class": "",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.encrypted": "0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osd_id": "1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.type": "block",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.vdo": "0"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            },
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "type": "block",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "vg_name": "ceph_vg1"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:        }
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:    ],
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:    "2": [
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:        {
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "devices": [
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "/dev/loop5"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            ],
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_name": "ceph_lv2",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_size": "21470642176",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "name": "ceph_lv2",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "tags": {
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.cluster_name": "ceph",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.crush_device_class": "",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.encrypted": "0",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osd_id": "2",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.type": "block",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:                "ceph.vdo": "0"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            },
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "type": "block",
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:            "vg_name": "ceph_vg2"
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:        }
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]:    ]
Nov 22 00:41:09 np0005531754 wonderful_jones[220002]: }
Nov 22 00:41:09 np0005531754 systemd[1]: libpod-1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80.scope: Deactivated successfully.
Nov 22 00:41:09 np0005531754 podman[219945]: 2025-11-22 05:41:09.67103874 +0000 UTC m=+1.033475975 container died 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:41:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay-abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2-merged.mount: Deactivated successfully.
Nov 22 00:41:09 np0005531754 podman[219945]: 2025-11-22 05:41:09.756805842 +0000 UTC m=+1.119243017 container remove 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 00:41:09 np0005531754 systemd[1]: libpod-conmon-1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80.scope: Deactivated successfully.
Nov 22 00:41:10 np0005531754 python3.9[220209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:10 np0005531754 python3.9[220395]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.o4lc69sp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:10 np0005531754 podman[220428]: 2025-11-22 05:41:10.537383896 +0000 UTC m=+0.054604904 container create 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 00:41:10 np0005531754 systemd[1]: Started libpod-conmon-8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8.scope.
Nov 22 00:41:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:41:10 np0005531754 podman[220428]: 2025-11-22 05:41:10.512301739 +0000 UTC m=+0.029522817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:41:10 np0005531754 podman[220428]: 2025-11-22 05:41:10.621381012 +0000 UTC m=+0.138602060 container init 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:41:10 np0005531754 podman[220428]: 2025-11-22 05:41:10.632339674 +0000 UTC m=+0.149560702 container start 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:41:10 np0005531754 podman[220428]: 2025-11-22 05:41:10.636807642 +0000 UTC m=+0.154028660 container attach 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:41:10 np0005531754 dazzling_chebyshev[220469]: 167 167
Nov 22 00:41:10 np0005531754 systemd[1]: libpod-8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8.scope: Deactivated successfully.
Nov 22 00:41:10 np0005531754 podman[220428]: 2025-11-22 05:41:10.638525868 +0000 UTC m=+0.155746926 container died 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:41:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7d367b3c00eb2434a68c3bc04adf875dfd3b45ed2a92d865f708854922e73357-merged.mount: Deactivated successfully.
Nov 22 00:41:10 np0005531754 podman[220428]: 2025-11-22 05:41:10.676731984 +0000 UTC m=+0.193952982 container remove 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:41:10 np0005531754 systemd[1]: libpod-conmon-8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8.scope: Deactivated successfully.
Nov 22 00:41:10 np0005531754 podman[220544]: 2025-11-22 05:41:10.851851945 +0000 UTC m=+0.048853272 container create 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:41:10 np0005531754 systemd[1]: Started libpod-conmon-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope.
Nov 22 00:41:10 np0005531754 podman[220544]: 2025-11-22 05:41:10.829461509 +0000 UTC m=+0.026462906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:41:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:41:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:41:10 np0005531754 podman[220544]: 2025-11-22 05:41:10.957794014 +0000 UTC m=+0.154795441 container init 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:41:10 np0005531754 podman[220544]: 2025-11-22 05:41:10.969352232 +0000 UTC m=+0.166353559 container start 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:41:10 np0005531754 podman[220544]: 2025-11-22 05:41:10.973900193 +0000 UTC m=+0.170901540 container attach 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:41:11 np0005531754 podman[220641]: 2025-11-22 05:41:11.188679959 +0000 UTC m=+0.089940174 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 00:41:11 np0005531754 python3.9[220642]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:11 np0005531754 python3.9[220740]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:12 np0005531754 magical_yalow[220587]: {
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "osd_id": 1,
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "type": "bluestore"
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:    },
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "osd_id": 2,
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "type": "bluestore"
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:    },
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "osd_id": 0,
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:        "type": "bluestore"
Nov 22 00:41:12 np0005531754 magical_yalow[220587]:    }
Nov 22 00:41:12 np0005531754 magical_yalow[220587]: }
Nov 22 00:41:12 np0005531754 systemd[1]: libpod-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope: Deactivated successfully.
Nov 22 00:41:12 np0005531754 systemd[1]: libpod-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope: Consumed 1.139s CPU time.
Nov 22 00:41:12 np0005531754 podman[220544]: 2025-11-22 05:41:12.104151523 +0000 UTC m=+1.301152850 container died 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:41:12 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93-merged.mount: Deactivated successfully.
Nov 22 00:41:12 np0005531754 podman[220544]: 2025-11-22 05:41:12.187772948 +0000 UTC m=+1.384774285 container remove 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 00:41:12 np0005531754 systemd[1]: libpod-conmon-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope: Deactivated successfully.
Nov 22 00:41:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:41:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:41:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:41:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:41:12 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 0720206e-d79f-4e65-9ef4-4b85840d88b1 does not exist
Nov 22 00:41:12 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 995ddc2e-729f-4868-8572-cbb60fa31849 does not exist
Nov 22 00:41:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:12 np0005531754 python3.9[220983]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:41:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:41:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:41:13 np0005531754 python3[221136]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 00:41:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:41:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:41:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:41:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:41:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:41:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:41:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:14 np0005531754 python3.9[221288]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:15 np0005531754 python3.9[221366]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:15 np0005531754 python3.9[221518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:16 np0005531754 python3.9[221596]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:17 np0005531754 python3.9[221748]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:17 np0005531754 python3.9[221826]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:18 np0005531754 python3.9[221978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:19 np0005531754 python3.9[222056]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:20 np0005531754 python3.9[222208]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:20 np0005531754 python3.9[222333]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790079.7000976-1313-188355291129806/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:21 np0005531754 python3.9[222485]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:22 np0005531754 python3.9[222637]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:41:23 np0005531754 python3.9[222792]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:24 np0005531754 python3.9[222944]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:41:25 np0005531754 python3.9[223097]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:41:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:26 np0005531754 python3.9[223251]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:41:26 np0005531754 python3.9[223406]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:27 np0005531754 python3.9[223558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:28 np0005531754 python3.9[223681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790087.1766067-1385-213896729287548/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:29 np0005531754 python3.9[223833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:30 np0005531754 python3.9[223956]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790088.694479-1400-154927054445206/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:30 np0005531754 python3.9[224108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:41:31 np0005531754 python3.9[224231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790090.3166318-1415-68787885280429/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:41:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:32 np0005531754 python3.9[224383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:41:32 np0005531754 systemd[1]: Reloading.
Nov 22 00:41:32 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:41:32 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:41:32 np0005531754 systemd[1]: Reached target edpm_libvirt.target.
Nov 22 00:41:32 np0005531754 podman[224421]: 2025-11-22 05:41:32.934568633 +0000 UTC m=+0.147404094 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 00:41:33 np0005531754 python3.9[224601]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 00:41:33 np0005531754 systemd[1]: Reloading.
Nov 22 00:41:33 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:41:33 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:41:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:34 np0005531754 systemd[1]: Reloading.
Nov 22 00:41:34 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:41:34 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:41:34 np0005531754 systemd[1]: session-48.scope: Deactivated successfully.
Nov 22 00:41:34 np0005531754 systemd[1]: session-48.scope: Consumed 3min 58.861s CPU time.
Nov 22 00:41:34 np0005531754 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Nov 22 00:41:35 np0005531754 systemd-logind[798]: Removed session 48.
Nov 22 00:41:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:41:36.904 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:41:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:41:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:41:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:41:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:41:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:40 np0005531754 systemd-logind[798]: New session 49 of user zuul.
Nov 22 00:41:40 np0005531754 systemd[1]: Started Session 49 of User zuul.
Nov 22 00:41:41 np0005531754 python3.9[224851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:41:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:42 np0005531754 podman[224899]: 2025-11-22 05:41:42.231039227 +0000 UTC m=+0.080233166 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 00:41:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:42 np0005531754 python3.9[225024]: ansible-ansible.builtin.service_facts Invoked
Nov 22 00:41:43 np0005531754 network[225041]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 00:41:43 np0005531754 network[225042]: 'network-scripts' will be removed from distribution in near future.
Nov 22 00:41:43 np0005531754 network[225043]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:41:43
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:41:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:41:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:50 np0005531754 python3.9[225315]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 00:41:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:51 np0005531754 python3.9[225400]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:41:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:41:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:41:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.309937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117309996, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1905, "num_deletes": 250, "total_data_size": 3251414, "memory_usage": 3291944, "flush_reason": "Manual Compaction"}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117325902, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1821187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11763, "largest_seqno": 13667, "table_properties": {"data_size": 1815071, "index_size": 3127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15249, "raw_average_key_size": 20, "raw_value_size": 1801540, "raw_average_value_size": 2376, "num_data_blocks": 145, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789896, "oldest_key_time": 1763789896, "file_creation_time": 1763790117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 16027 microseconds, and 8498 cpu microseconds.
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.325962) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1821187 bytes OK
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.325986) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.328137) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.328157) EVENT_LOG_v1 {"time_micros": 1763790117328150, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.328179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3243421, prev total WAL file size 3243421, number of live WAL files 2.
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.329707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1778KB)], [29(7606KB)]
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117329784, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9610390, "oldest_snapshot_seqno": -1}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4053 keys, 7690750 bytes, temperature: kUnknown
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117392188, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7690750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7661728, "index_size": 17776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96309, "raw_average_key_size": 23, "raw_value_size": 7586801, "raw_average_value_size": 1871, "num_data_blocks": 773, "num_entries": 4053, "num_filter_entries": 4053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.392513) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7690750 bytes
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.394346) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.8 rd, 123.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(9.5) write-amplify(4.2) OK, records in: 4460, records dropped: 407 output_compression: NoCompression
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.394376) EVENT_LOG_v1 {"time_micros": 1763790117394361, "job": 12, "event": "compaction_finished", "compaction_time_micros": 62488, "compaction_time_cpu_micros": 32347, "output_level": 6, "num_output_files": 1, "total_output_size": 7690750, "num_input_records": 4460, "num_output_records": 4053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117395071, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117397435, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.329589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:41:57 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:41:57 np0005531754 python3.9[225553]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:41:58 np0005531754 python3.9[225705]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:41:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:41:59 np0005531754 python3.9[225858]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:42:00 np0005531754 python3.9[226010]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:42:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:01 np0005531754 python3.9[226163]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:02 np0005531754 python3.9[226286]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790120.5513635-95-120336329415970/.source.iscsi _original_basename=.ltc7p64w follow=False checksum=cb9ad46cd98a71044757bb18980699a4118db1db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:03 np0005531754 python3.9[226438]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:03 np0005531754 podman[226439]: 2025-11-22 05:42:03.27177979 +0000 UTC m=+0.124331593 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:42:04 np0005531754 python3.9[226617]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:05 np0005531754 python3.9[226769]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:42:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:06 np0005531754 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 22 00:42:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:07 np0005531754 python3.9[226925]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:42:07 np0005531754 systemd[1]: Reloading.
Nov 22 00:42:07 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:42:07 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:42:08 np0005531754 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 00:42:08 np0005531754 systemd[1]: Starting Open-iSCSI...
Nov 22 00:42:08 np0005531754 kernel: Loading iSCSI transport class v2.0-870.
Nov 22 00:42:08 np0005531754 systemd[1]: Started Open-iSCSI.
Nov 22 00:42:08 np0005531754 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 22 00:42:08 np0005531754 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 22 00:42:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:09 np0005531754 python3.9[227127]: ansible-ansible.builtin.service_facts Invoked
Nov 22 00:42:09 np0005531754 network[227144]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 00:42:09 np0005531754 network[227145]: 'network-scripts' will be removed from distribution in near future.
Nov 22 00:42:09 np0005531754 network[227146]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 00:42:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:12 np0005531754 podman[227217]: 2025-11-22 05:42:12.377344582 +0000 UTC m=+0.089246214 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 22 00:42:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5e53ff88-b331-4730-b8bd-1b0de6091f66 does not exist
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 55a6836b-6cf4-4742-8fa6-078f29e394bd does not exist
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 79bd95fc-17f9-4571-a2d8-ebfa0a6b6cb0 does not exist
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:42:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:42:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:42:14 np0005531754 podman[227608]: 2025-11-22 05:42:14.058397811 +0000 UTC m=+0.072362056 container create db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:42:14 np0005531754 systemd[1]: Started libpod-conmon-db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72.scope.
Nov 22 00:42:14 np0005531754 podman[227608]: 2025-11-22 05:42:14.026750834 +0000 UTC m=+0.040715179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:42:14 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:42:14 np0005531754 podman[227608]: 2025-11-22 05:42:14.152574786 +0000 UTC m=+0.166539061 container init db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:42:14 np0005531754 podman[227608]: 2025-11-22 05:42:14.164631654 +0000 UTC m=+0.178595949 container start db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:42:14 np0005531754 podman[227608]: 2025-11-22 05:42:14.168751033 +0000 UTC m=+0.182715328 container attach db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:42:14 np0005531754 youthful_torvalds[227667]: 167 167
Nov 22 00:42:14 np0005531754 systemd[1]: libpod-db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72.scope: Deactivated successfully.
Nov 22 00:42:14 np0005531754 podman[227608]: 2025-11-22 05:42:14.173708165 +0000 UTC m=+0.187672490 container died db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:42:14 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e8edc9258270854038842eec02ecad466bcb8d59e58cd9d70ebde449de837d0d-merged.mount: Deactivated successfully.
Nov 22 00:42:14 np0005531754 podman[227608]: 2025-11-22 05:42:14.226139153 +0000 UTC m=+0.240103448 container remove db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:42:14 np0005531754 systemd[1]: libpod-conmon-db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72.scope: Deactivated successfully.
Nov 22 00:42:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:14 np0005531754 podman[227749]: 2025-11-22 05:42:14.474018606 +0000 UTC m=+0.075322295 container create 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 00:42:14 np0005531754 systemd[1]: Started libpod-conmon-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope.
Nov 22 00:42:14 np0005531754 podman[227749]: 2025-11-22 05:42:14.444868134 +0000 UTC m=+0.046171863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:42:14 np0005531754 python3.9[227743]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 00:42:14 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:42:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:14 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:14 np0005531754 podman[227749]: 2025-11-22 05:42:14.599797306 +0000 UTC m=+0.201101065 container init 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:42:14 np0005531754 podman[227749]: 2025-11-22 05:42:14.607749567 +0000 UTC m=+0.209053266 container start 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 00:42:14 np0005531754 podman[227749]: 2025-11-22 05:42:14.616207711 +0000 UTC m=+0.217511400 container attach 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 00:42:15 np0005531754 python3.9[227932]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 22 00:42:15 np0005531754 bold_aryabhata[227766]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:42:15 np0005531754 bold_aryabhata[227766]: --> relative data size: 1.0
Nov 22 00:42:15 np0005531754 bold_aryabhata[227766]: --> All data devices are unavailable
Nov 22 00:42:15 np0005531754 systemd[1]: libpod-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope: Deactivated successfully.
Nov 22 00:42:15 np0005531754 systemd[1]: libpod-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope: Consumed 1.095s CPU time.
Nov 22 00:42:15 np0005531754 podman[227749]: 2025-11-22 05:42:15.763866858 +0000 UTC m=+1.365170557 container died 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:42:15 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696-merged.mount: Deactivated successfully.
Nov 22 00:42:15 np0005531754 podman[227749]: 2025-11-22 05:42:15.854177539 +0000 UTC m=+1.455481218 container remove 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:42:15 np0005531754 systemd[1]: libpod-conmon-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope: Deactivated successfully.
Nov 22 00:42:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:16 np0005531754 python3.9[228216]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:16 np0005531754 podman[228298]: 2025-11-22 05:42:16.717325663 +0000 UTC m=+0.070888198 container create 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:42:16 np0005531754 systemd[1]: Started libpod-conmon-9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a.scope.
Nov 22 00:42:16 np0005531754 podman[228298]: 2025-11-22 05:42:16.687545954 +0000 UTC m=+0.041108539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:42:16 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:42:16 np0005531754 podman[228298]: 2025-11-22 05:42:16.828069105 +0000 UTC m=+0.181631710 container init 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 00:42:16 np0005531754 podman[228298]: 2025-11-22 05:42:16.840764381 +0000 UTC m=+0.194326916 container start 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:42:16 np0005531754 podman[228298]: 2025-11-22 05:42:16.844910401 +0000 UTC m=+0.198472946 container attach 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:42:16 np0005531754 happy_tu[228344]: 167 167
Nov 22 00:42:16 np0005531754 systemd[1]: libpod-9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a.scope: Deactivated successfully.
Nov 22 00:42:16 np0005531754 podman[228298]: 2025-11-22 05:42:16.849516283 +0000 UTC m=+0.203078858 container died 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 00:42:16 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1ed2dfd930ede31d4ff5cd2497881540fe8167930b76fd8acaf39ad078da8b6d-merged.mount: Deactivated successfully.
Nov 22 00:42:16 np0005531754 podman[228298]: 2025-11-22 05:42:16.914823682 +0000 UTC m=+0.268386217 container remove 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:42:16 np0005531754 systemd[1]: libpod-conmon-9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a.scope: Deactivated successfully.
Nov 22 00:42:17 np0005531754 podman[228420]: 2025-11-22 05:42:17.152811913 +0000 UTC m=+0.073686491 container create 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 00:42:17 np0005531754 python3.9[228414]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790135.904878-172-96704401735867/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:17 np0005531754 podman[228420]: 2025-11-22 05:42:17.122402089 +0000 UTC m=+0.043276727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:42:17 np0005531754 systemd[1]: Started libpod-conmon-2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26.scope.
Nov 22 00:42:17 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:42:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:17 np0005531754 podman[228420]: 2025-11-22 05:42:17.326965435 +0000 UTC m=+0.247840023 container init 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:42:17 np0005531754 podman[228420]: 2025-11-22 05:42:17.339263321 +0000 UTC m=+0.260137879 container start 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:42:17 np0005531754 podman[228420]: 2025-11-22 05:42:17.342941958 +0000 UTC m=+0.263816516 container attach 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]: {
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:    "0": [
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:        {
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "devices": [
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "/dev/loop3"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            ],
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_name": "ceph_lv0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_size": "21470642176",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "name": "ceph_lv0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "tags": {
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cluster_name": "ceph",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.crush_device_class": "",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.encrypted": "0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osd_id": "0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.type": "block",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.vdo": "0"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            },
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "type": "block",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "vg_name": "ceph_vg0"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:        }
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:    ],
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:    "1": [
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:        {
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "devices": [
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "/dev/loop4"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            ],
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_name": "ceph_lv1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_size": "21470642176",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "name": "ceph_lv1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "tags": {
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cluster_name": "ceph",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.crush_device_class": "",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.encrypted": "0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osd_id": "1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.type": "block",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.vdo": "0"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            },
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "type": "block",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "vg_name": "ceph_vg1"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:        }
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:    ],
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:    "2": [
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:        {
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "devices": [
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "/dev/loop5"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            ],
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_name": "ceph_lv2",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_size": "21470642176",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "name": "ceph_lv2",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "tags": {
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.cluster_name": "ceph",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.crush_device_class": "",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.encrypted": "0",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osd_id": "2",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.type": "block",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:                "ceph.vdo": "0"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            },
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "type": "block",
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:            "vg_name": "ceph_vg2"
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:        }
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]:    ]
Nov 22 00:42:18 np0005531754 wizardly_shtern[228437]: }
Nov 22 00:42:18 np0005531754 python3.9[228593]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:18 np0005531754 podman[228420]: 2025-11-22 05:42:18.173795387 +0000 UTC m=+1.094669935 container died 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:42:18 np0005531754 systemd[1]: libpod-2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26.scope: Deactivated successfully.
Nov 22 00:42:18 np0005531754 systemd[1]: var-lib-containers-storage-overlay-055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861-merged.mount: Deactivated successfully.
Nov 22 00:42:18 np0005531754 podman[228420]: 2025-11-22 05:42:18.226887462 +0000 UTC m=+1.147762010 container remove 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:42:18 np0005531754 systemd[1]: libpod-conmon-2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26.scope: Deactivated successfully.
Nov 22 00:42:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:18 np0005531754 podman[228855]: 2025-11-22 05:42:18.965118319 +0000 UTC m=+0.067763856 container create a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:42:19 np0005531754 systemd[1]: Started libpod-conmon-a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe.scope.
Nov 22 00:42:19 np0005531754 podman[228855]: 2025-11-22 05:42:18.936064809 +0000 UTC m=+0.038710436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:42:19 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:42:19 np0005531754 podman[228855]: 2025-11-22 05:42:19.065672231 +0000 UTC m=+0.168317838 container init a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:42:19 np0005531754 podman[228855]: 2025-11-22 05:42:19.077747871 +0000 UTC m=+0.180393438 container start a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:42:19 np0005531754 podman[228855]: 2025-11-22 05:42:19.081947392 +0000 UTC m=+0.184592989 container attach a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:42:19 np0005531754 kind_pare[228915]: 167 167
Nov 22 00:42:19 np0005531754 systemd[1]: libpod-a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe.scope: Deactivated successfully.
Nov 22 00:42:19 np0005531754 podman[228855]: 2025-11-22 05:42:19.085737842 +0000 UTC m=+0.188383409 container died a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 00:42:19 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1d05929e956d110c9a5ac31ba218a20a12e6e632dbcc6b097f9b30b26d06ad56-merged.mount: Deactivated successfully.
Nov 22 00:42:19 np0005531754 podman[228855]: 2025-11-22 05:42:19.142513865 +0000 UTC m=+0.245159422 container remove a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:42:19 np0005531754 systemd[1]: libpod-conmon-a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe.scope: Deactivated successfully.
Nov 22 00:42:19 np0005531754 podman[228942]: 2025-11-22 05:42:19.390309516 +0000 UTC m=+0.074462392 container create 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:42:19 np0005531754 python3.9[228920]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:42:19 np0005531754 systemd[1]: Started libpod-conmon-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope.
Nov 22 00:42:19 np0005531754 podman[228942]: 2025-11-22 05:42:19.356871942 +0000 UTC m=+0.041024908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:42:19 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:42:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:19 np0005531754 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 00:42:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:42:19 np0005531754 systemd[1]: Stopped Load Kernel Modules.
Nov 22 00:42:19 np0005531754 systemd[1]: Stopping Load Kernel Modules...
Nov 22 00:42:19 np0005531754 systemd[1]: Starting Load Kernel Modules...
Nov 22 00:42:19 np0005531754 podman[228942]: 2025-11-22 05:42:19.496632941 +0000 UTC m=+0.180785887 container init 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:42:19 np0005531754 systemd[1]: Finished Load Kernel Modules.
Nov 22 00:42:19 np0005531754 podman[228942]: 2025-11-22 05:42:19.509609826 +0000 UTC m=+0.193762732 container start 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:42:19 np0005531754 podman[228942]: 2025-11-22 05:42:19.513732395 +0000 UTC m=+0.197885301 container attach 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 00:42:20 np0005531754 python3.9[229122]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:42:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]: {
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "osd_id": 1,
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "type": "bluestore"
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:    },
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "osd_id": 2,
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "type": "bluestore"
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:    },
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "osd_id": 0,
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:        "type": "bluestore"
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]:    }
Nov 22 00:42:20 np0005531754 gifted_darwin[228961]: }
Nov 22 00:42:20 np0005531754 systemd[1]: libpod-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope: Deactivated successfully.
Nov 22 00:42:20 np0005531754 systemd[1]: libpod-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope: Consumed 1.111s CPU time.
Nov 22 00:42:20 np0005531754 podman[228942]: 2025-11-22 05:42:20.622119971 +0000 UTC m=+1.306272847 container died 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 00:42:20 np0005531754 systemd[1]: var-lib-containers-storage-overlay-50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66-merged.mount: Deactivated successfully.
Nov 22 00:42:20 np0005531754 podman[228942]: 2025-11-22 05:42:20.713665846 +0000 UTC m=+1.397818732 container remove 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 22 00:42:20 np0005531754 systemd[1]: libpod-conmon-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope: Deactivated successfully.
Nov 22 00:42:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:42:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:42:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:42:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:42:20 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 39b2ba35-ef84-45fd-ab26-8e8fd0f46121 does not exist
Nov 22 00:42:20 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 312ca808-3d91-4437-b0a6-b1cfa570b27f does not exist
Nov 22 00:42:21 np0005531754 python3.9[229364]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:42:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:42:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:42:22 np0005531754 python3.9[229516]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:42:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:23 np0005531754 python3.9[229668]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:23 np0005531754 python3.9[229791]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790142.4280372-230-123302246964734/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:24 np0005531754 python3.9[229943]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:42:25 np0005531754 python3.9[230096]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:26 np0005531754 python3.9[230248]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:27 np0005531754 python3.9[230400]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:28 np0005531754 python3.9[230552]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:29 np0005531754 python3.9[230704]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:29 np0005531754 python3.9[230856]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:30 np0005531754 python3.9[231008]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:31 np0005531754 python3.9[231160]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:42:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:32 np0005531754 python3.9[231314]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:33 np0005531754 python3.9[231466]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:42:34 np0005531754 podman[231590]: 2025-11-22 05:42:34.191045611 +0000 UTC m=+0.127792464 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:42:34 np0005531754 python3.9[231638]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:34 np0005531754 python3.9[231723]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:42:35 np0005531754 python3.9[231875]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:36 np0005531754 python3.9[231953]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:42:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:42:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:42:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:42:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:42:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:42:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:42:37 np0005531754 python3.9[232105]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:38 np0005531754 python3.9[232257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:38 np0005531754 python3.9[232335]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:39 np0005531754 python3.9[232487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:40 np0005531754 python3.9[232565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:41 np0005531754 python3.9[232717]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:42:41 np0005531754 systemd[1]: Reloading.
Nov 22 00:42:41 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:42:41 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:42:42 np0005531754 python3.9[232909]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:42 np0005531754 podman[232959]: 2025-11-22 05:42:42.72278966 +0000 UTC m=+0.091191885 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 00:42:42 np0005531754 python3.9[233006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:42:43
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta']
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:42:43 np0005531754 python3.9[233159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:42:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:42:44 np0005531754 python3.9[233237]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:45 np0005531754 python3.9[233389]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:42:45 np0005531754 systemd[1]: Reloading.
Nov 22 00:42:45 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:42:45 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:42:45 np0005531754 systemd[1]: Starting Create netns directory...
Nov 22 00:42:45 np0005531754 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 00:42:45 np0005531754 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 00:42:45 np0005531754 systemd[1]: Finished Create netns directory.
Nov 22 00:42:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:46 np0005531754 python3.9[233583]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:42:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:47 np0005531754 python3.9[233735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:48 np0005531754 python3.9[233858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790167.18557-437-72927228338621/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:42:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:49 np0005531754 python3.9[234010]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:42:50 np0005531754 python3.9[234162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:42:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:51 np0005531754 python3.9[234285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790169.6121979-462-42850839516937/.source.json _original_basename=.awdj5eot follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:51 np0005531754 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 22 00:42:51 np0005531754 python3.9[234438]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:42:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:52 np0005531754 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:42:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:42:54 np0005531754 python3.9[234866]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 22 00:42:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:55 np0005531754 python3.9[235018]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 00:42:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:56 np0005531754 python3.9[235170]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 00:42:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:42:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:42:58 np0005531754 python3[235349]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 00:43:00 np0005531754 podman[235362]: 2025-11-22 05:43:00.110625075 +0000 UTC m=+1.344840068 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 00:43:00 np0005531754 podman[235418]: 2025-11-22 05:43:00.321943111 +0000 UTC m=+0.068840374 container create 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:43:00 np0005531754 podman[235418]: 2025-11-22 05:43:00.289270296 +0000 UTC m=+0.036167609 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 00:43:00 np0005531754 python3[235349]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 00:43:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:01 np0005531754 python3.9[235609]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:43:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:02 np0005531754 python3.9[235763]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:02 np0005531754 python3.9[235839]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:43:03 np0005531754 python3.9[235990]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763790182.957054-550-250754807445925/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:04 np0005531754 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 22 00:43:04 np0005531754 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 00:43:04 np0005531754 python3.9[236066]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:43:04 np0005531754 systemd[1]: Reloading.
Nov 22 00:43:04 np0005531754 podman[236068]: 2025-11-22 05:43:04.497217531 +0000 UTC m=+0.108170215 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 00:43:04 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:43:04 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:43:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:05 np0005531754 python3.9[236204]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:05 np0005531754 systemd[1]: Reloading.
Nov 22 00:43:05 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:43:05 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:43:05 np0005531754 systemd[1]: Starting multipathd container...
Nov 22 00:43:06 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:06 np0005531754 systemd[1]: Started /usr/bin/podman healthcheck run 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.
Nov 22 00:43:06 np0005531754 podman[236244]: 2025-11-22 05:43:06.166283113 +0000 UTC m=+0.154175953 container init 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 00:43:06 np0005531754 multipathd[236259]: + sudo -E kolla_set_configs
Nov 22 00:43:06 np0005531754 podman[236244]: 2025-11-22 05:43:06.198806255 +0000 UTC m=+0.186699105 container start 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 00:43:06 np0005531754 podman[236244]: multipathd
Nov 22 00:43:06 np0005531754 systemd[1]: Started multipathd container.
Nov 22 00:43:06 np0005531754 multipathd[236259]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 00:43:06 np0005531754 multipathd[236259]: INFO:__main__:Validating config file
Nov 22 00:43:06 np0005531754 multipathd[236259]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 00:43:06 np0005531754 multipathd[236259]: INFO:__main__:Writing out command to execute
Nov 22 00:43:06 np0005531754 multipathd[236259]: ++ cat /run_command
Nov 22 00:43:06 np0005531754 multipathd[236259]: + CMD='/usr/sbin/multipathd -d'
Nov 22 00:43:06 np0005531754 multipathd[236259]: + ARGS=
Nov 22 00:43:06 np0005531754 multipathd[236259]: + sudo kolla_copy_cacerts
Nov 22 00:43:06 np0005531754 podman[236266]: 2025-11-22 05:43:06.305345196 +0000 UTC m=+0.086286886 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 00:43:06 np0005531754 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-17db237c4aa34bb3.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 00:43:06 np0005531754 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-17db237c4aa34bb3.service: Failed with result 'exit-code'.
Nov 22 00:43:06 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:43:06 np0005531754 multipathd[236259]: + [[ ! -n '' ]]
Nov 22 00:43:06 np0005531754 multipathd[236259]: + . kolla_extend_start
Nov 22 00:43:06 np0005531754 multipathd[236259]: Running command: '/usr/sbin/multipathd -d'
Nov 22 00:43:06 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:43:06 np0005531754 multipathd[236259]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 00:43:06 np0005531754 multipathd[236259]: + umask 0022
Nov 22 00:43:06 np0005531754 multipathd[236259]: + exec /usr/sbin/multipathd -d
Nov 22 00:43:06 np0005531754 multipathd[236259]: 3639.782810 | --------start up--------
Nov 22 00:43:06 np0005531754 multipathd[236259]: 3639.782835 | read /etc/multipath.conf
Nov 22 00:43:06 np0005531754 multipathd[236259]: 3639.791450 | path checkers start up
Nov 22 00:43:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:07 np0005531754 python3.9[236449]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:43:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:07 np0005531754 python3.9[236603]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:08 np0005531754 python3.9[236768]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:43:08 np0005531754 systemd[1]: Stopping multipathd container...
Nov 22 00:43:08 np0005531754 multipathd[236259]: 3642.405785 | exit (signal)
Nov 22 00:43:08 np0005531754 multipathd[236259]: 3642.406512 | --------shut down-------
Nov 22 00:43:09 np0005531754 systemd[1]: libpod-90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.scope: Deactivated successfully.
Nov 22 00:43:09 np0005531754 podman[236772]: 2025-11-22 05:43:09.014200368 +0000 UTC m=+0.095668963 container died 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 22 00:43:09 np0005531754 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-17db237c4aa34bb3.timer: Deactivated successfully.
Nov 22 00:43:09 np0005531754 systemd[1]: Stopped /usr/bin/podman healthcheck run 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.
Nov 22 00:43:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-userdata-shm.mount: Deactivated successfully.
Nov 22 00:43:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2-merged.mount: Deactivated successfully.
Nov 22 00:43:09 np0005531754 podman[236772]: 2025-11-22 05:43:09.263704226 +0000 UTC m=+0.345172821 container cleanup 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 00:43:09 np0005531754 podman[236772]: multipathd
Nov 22 00:43:09 np0005531754 podman[236799]: multipathd
Nov 22 00:43:09 np0005531754 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 22 00:43:09 np0005531754 systemd[1]: Stopped multipathd container.
Nov 22 00:43:09 np0005531754 systemd[1]: Starting multipathd container...
Nov 22 00:43:09 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:09 np0005531754 systemd[1]: Started /usr/bin/podman healthcheck run 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.
Nov 22 00:43:09 np0005531754 podman[236812]: 2025-11-22 05:43:09.552394969 +0000 UTC m=+0.160803749 container init 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 00:43:09 np0005531754 multipathd[236828]: + sudo -E kolla_set_configs
Nov 22 00:43:09 np0005531754 podman[236812]: 2025-11-22 05:43:09.5939922 +0000 UTC m=+0.202400920 container start 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 22 00:43:09 np0005531754 podman[236812]: multipathd
Nov 22 00:43:09 np0005531754 systemd[1]: Started multipathd container.
Nov 22 00:43:09 np0005531754 multipathd[236828]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 00:43:09 np0005531754 multipathd[236828]: INFO:__main__:Validating config file
Nov 22 00:43:09 np0005531754 multipathd[236828]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 00:43:09 np0005531754 multipathd[236828]: INFO:__main__:Writing out command to execute
Nov 22 00:43:09 np0005531754 multipathd[236828]: ++ cat /run_command
Nov 22 00:43:09 np0005531754 multipathd[236828]: + CMD='/usr/sbin/multipathd -d'
Nov 22 00:43:09 np0005531754 multipathd[236828]: + ARGS=
Nov 22 00:43:09 np0005531754 multipathd[236828]: + sudo kolla_copy_cacerts
Nov 22 00:43:09 np0005531754 podman[236835]: 2025-11-22 05:43:09.702781921 +0000 UTC m=+0.090925319 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:43:09 np0005531754 multipathd[236828]: + [[ ! -n '' ]]
Nov 22 00:43:09 np0005531754 multipathd[236828]: + . kolla_extend_start
Nov 22 00:43:09 np0005531754 multipathd[236828]: Running command: '/usr/sbin/multipathd -d'
Nov 22 00:43:09 np0005531754 multipathd[236828]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 00:43:09 np0005531754 multipathd[236828]: + umask 0022
Nov 22 00:43:09 np0005531754 multipathd[236828]: + exec /usr/sbin/multipathd -d
Nov 22 00:43:09 np0005531754 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-1ca5ff9852e0b879.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 00:43:09 np0005531754 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-1ca5ff9852e0b879.service: Failed with result 'exit-code'.
Nov 22 00:43:09 np0005531754 multipathd[236828]: 3643.157664 | --------start up--------
Nov 22 00:43:09 np0005531754 multipathd[236828]: 3643.157683 | read /etc/multipath.conf
Nov 22 00:43:09 np0005531754 multipathd[236828]: 3643.164534 | path checkers start up
Nov 22 00:43:10 np0005531754 python3.9[237019]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:11 np0005531754 python3.9[237171]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 00:43:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:12 np0005531754 python3.9[237323]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 22 00:43:12 np0005531754 kernel: Key type psk registered
Nov 22 00:43:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:13 np0005531754 podman[237456]: 2025-11-22 05:43:13.240557612 +0000 UTC m=+0.084816257 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:43:13 np0005531754 python3.9[237500]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:43:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:43:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:43:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:43:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:43:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:43:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:43:14 np0005531754 python3.9[237623]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790192.797749-630-66181010933327/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:14 np0005531754 python3.9[237775]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:15 np0005531754 python3.9[237927]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:43:15 np0005531754 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 00:43:15 np0005531754 systemd[1]: Stopped Load Kernel Modules.
Nov 22 00:43:15 np0005531754 systemd[1]: Stopping Load Kernel Modules...
Nov 22 00:43:15 np0005531754 systemd[1]: Starting Load Kernel Modules...
Nov 22 00:43:16 np0005531754 systemd[1]: Finished Load Kernel Modules.
Nov 22 00:43:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:16 np0005531754 python3.9[238083]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 00:43:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:19 np0005531754 systemd[1]: Reloading.
Nov 22 00:43:19 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:43:19 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:43:19 np0005531754 systemd[1]: Reloading.
Nov 22 00:43:19 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:43:19 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:43:20 np0005531754 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 00:43:20 np0005531754 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 00:43:20 np0005531754 lvm[238194]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 00:43:20 np0005531754 lvm[238194]: VG ceph_vg2 finished
Nov 22 00:43:20 np0005531754 lvm[238195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 00:43:20 np0005531754 lvm[238195]: VG ceph_vg0 finished
Nov 22 00:43:20 np0005531754 lvm[238197]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 00:43:20 np0005531754 lvm[238197]: VG ceph_vg1 finished
Nov 22 00:43:20 np0005531754 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 00:43:20 np0005531754 systemd[1]: Starting man-db-cache-update.service...
Nov 22 00:43:20 np0005531754 systemd[1]: Reloading.
Nov 22 00:43:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:20 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:43:20 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:43:20 np0005531754 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:43:21 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f83840c3-6e87-4edd-9555-1050f34df778 does not exist
Nov 22 00:43:21 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f04eb052-a28e-4b2c-b06d-503efa967d52 does not exist
Nov 22 00:43:21 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 354c2b91-6ac4-4ea9-94c0-2abb13472046 does not exist
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:43:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:43:22 np0005531754 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 00:43:22 np0005531754 systemd[1]: Finished man-db-cache-update.service.
Nov 22 00:43:22 np0005531754 systemd[1]: man-db-cache-update.service: Consumed 1.803s CPU time.
Nov 22 00:43:22 np0005531754 systemd[1]: run-r377decdc017046f98ac041f4fc40288c.service: Deactivated successfully.
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.331017) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202331136, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1160, "num_deletes": 505, "total_data_size": 1252798, "memory_usage": 1285744, "flush_reason": "Manual Compaction"}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202339051, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1240629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13668, "largest_seqno": 14827, "table_properties": {"data_size": 1235470, "index_size": 2171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13526, "raw_average_key_size": 17, "raw_value_size": 1223175, "raw_average_value_size": 1613, "num_data_blocks": 99, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790117, "oldest_key_time": 1763790117, "file_creation_time": 1763790202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 8145 microseconds, and 3448 cpu microseconds.
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.339143) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1240629 bytes OK
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.339219) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.341380) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.341392) EVENT_LOG_v1 {"time_micros": 1763790202341388, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.341408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1246381, prev total WAL file size 1246381, number of live WAL files 2.
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.342124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1211KB)], [32(7510KB)]
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202342170, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8931379, "oldest_snapshot_seqno": -1}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3788 keys, 7016608 bytes, temperature: kUnknown
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202401122, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7016608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6989620, "index_size": 16446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92922, "raw_average_key_size": 24, "raw_value_size": 6919303, "raw_average_value_size": 1826, "num_data_blocks": 696, "num_entries": 3788, "num_filter_entries": 3788, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.401432) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7016608 bytes
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.403169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.3 rd, 118.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.3 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(12.9) write-amplify(5.7) OK, records in: 4811, records dropped: 1023 output_compression: NoCompression
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.403201) EVENT_LOG_v1 {"time_micros": 1763790202403186, "job": 14, "event": "compaction_finished", "compaction_time_micros": 59032, "compaction_time_cpu_micros": 19000, "output_level": 6, "num_output_files": 1, "total_output_size": 7016608, "num_input_records": 4811, "num_output_records": 3788, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202403711, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202405886, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.342004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:43:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:43:22 np0005531754 python3.9[239773]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:43:22 np0005531754 podman[239812]: 2025-11-22 05:43:22.451177135 +0000 UTC m=+0.069755258 container create b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 00:43:22 np0005531754 systemd[1]: Started libpod-conmon-b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd.scope.
Nov 22 00:43:22 np0005531754 systemd[1]: Stopping Open-iSCSI...
Nov 22 00:43:22 np0005531754 iscsid[226967]: iscsid shutting down.
Nov 22 00:43:22 np0005531754 systemd[1]: iscsid.service: Deactivated successfully.
Nov 22 00:43:22 np0005531754 systemd[1]: Stopped Open-iSCSI.
Nov 22 00:43:22 np0005531754 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 00:43:22 np0005531754 podman[239812]: 2025-11-22 05:43:22.424900979 +0000 UTC m=+0.043479112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:43:22 np0005531754 systemd[1]: Starting Open-iSCSI...
Nov 22 00:43:22 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:22 np0005531754 systemd[1]: Started Open-iSCSI.
Nov 22 00:43:22 np0005531754 podman[239812]: 2025-11-22 05:43:22.54992874 +0000 UTC m=+0.168506913 container init b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 00:43:22 np0005531754 podman[239812]: 2025-11-22 05:43:22.561600879 +0000 UTC m=+0.180179002 container start b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:43:22 np0005531754 podman[239812]: 2025-11-22 05:43:22.565461431 +0000 UTC m=+0.184039624 container attach b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 00:43:22 np0005531754 naughty_goodall[239830]: 167 167
Nov 22 00:43:22 np0005531754 systemd[1]: libpod-b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd.scope: Deactivated successfully.
Nov 22 00:43:22 np0005531754 podman[239812]: 2025-11-22 05:43:22.571571563 +0000 UTC m=+0.190149696 container died b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:43:22 np0005531754 systemd[1]: var-lib-containers-storage-overlay-05e316cc3ea5a2086b85e1006796c397914adaac65b020ff75e10aae981bb836-merged.mount: Deactivated successfully.
Nov 22 00:43:22 np0005531754 podman[239812]: 2025-11-22 05:43:22.630210625 +0000 UTC m=+0.248788758 container remove b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:43:22 np0005531754 systemd[1]: libpod-conmon-b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd.scope: Deactivated successfully.
Nov 22 00:43:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:22 np0005531754 podman[239882]: 2025-11-22 05:43:22.874793782 +0000 UTC m=+0.060659358 container create 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 00:43:22 np0005531754 systemd[1]: Started libpod-conmon-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope.
Nov 22 00:43:22 np0005531754 podman[239882]: 2025-11-22 05:43:22.844069278 +0000 UTC m=+0.029934904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:43:22 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:22 np0005531754 podman[239882]: 2025-11-22 05:43:22.994854421 +0000 UTC m=+0.180720047 container init 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:43:23 np0005531754 podman[239882]: 2025-11-22 05:43:23.008284786 +0000 UTC m=+0.194150372 container start 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:43:23 np0005531754 podman[239882]: 2025-11-22 05:43:23.013272158 +0000 UTC m=+0.199137734 container attach 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:43:23 np0005531754 python3.9[240025]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 00:43:24 np0005531754 kind_carver[239946]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:43:24 np0005531754 kind_carver[239946]: --> relative data size: 1.0
Nov 22 00:43:24 np0005531754 kind_carver[239946]: --> All data devices are unavailable
Nov 22 00:43:24 np0005531754 systemd[1]: libpod-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope: Deactivated successfully.
Nov 22 00:43:24 np0005531754 podman[239882]: 2025-11-22 05:43:24.189047669 +0000 UTC m=+1.374913225 container died 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:43:24 np0005531754 systemd[1]: libpod-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope: Consumed 1.122s CPU time.
Nov 22 00:43:24 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7-merged.mount: Deactivated successfully.
Nov 22 00:43:24 np0005531754 podman[239882]: 2025-11-22 05:43:24.255727265 +0000 UTC m=+1.441592821 container remove 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:43:24 np0005531754 systemd[1]: libpod-conmon-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope: Deactivated successfully.
Nov 22 00:43:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:24 np0005531754 python3.9[240275]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:25 np0005531754 podman[240384]: 2025-11-22 05:43:25.134436291 +0000 UTC m=+0.060867433 container create e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:43:25 np0005531754 systemd[1]: Started libpod-conmon-e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67.scope.
Nov 22 00:43:25 np0005531754 podman[240384]: 2025-11-22 05:43:25.107980081 +0000 UTC m=+0.034411283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:43:25 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:25 np0005531754 podman[240384]: 2025-11-22 05:43:25.24806903 +0000 UTC m=+0.174500182 container init e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:43:25 np0005531754 podman[240384]: 2025-11-22 05:43:25.260126029 +0000 UTC m=+0.186557181 container start e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 00:43:25 np0005531754 podman[240384]: 2025-11-22 05:43:25.264749592 +0000 UTC m=+0.191180734 container attach e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 00:43:25 np0005531754 hopeful_dubinsky[240424]: 167 167
Nov 22 00:43:25 np0005531754 systemd[1]: libpod-e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67.scope: Deactivated successfully.
Nov 22 00:43:25 np0005531754 podman[240452]: 2025-11-22 05:43:25.314086408 +0000 UTC m=+0.033117789 container died e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:43:25 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4682a9d945cb53f8f22a0f310a26fb42021d4193298ffab416588c1443d50ee2-merged.mount: Deactivated successfully.
Nov 22 00:43:25 np0005531754 podman[240452]: 2025-11-22 05:43:25.352812373 +0000 UTC m=+0.071843764 container remove e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 00:43:25 np0005531754 systemd[1]: libpod-conmon-e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67.scope: Deactivated successfully.
Nov 22 00:43:25 np0005531754 podman[240527]: 2025-11-22 05:43:25.553717842 +0000 UTC m=+0.062654170 container create a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:43:25 np0005531754 systemd[1]: Started libpod-conmon-a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76.scope.
Nov 22 00:43:25 np0005531754 podman[240527]: 2025-11-22 05:43:25.522562288 +0000 UTC m=+0.031498696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:43:25 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:25 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:25 np0005531754 podman[240527]: 2025-11-22 05:43:25.664261199 +0000 UTC m=+0.173197557 container init a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:43:25 np0005531754 podman[240527]: 2025-11-22 05:43:25.676548655 +0000 UTC m=+0.185485003 container start a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:43:25 np0005531754 podman[240527]: 2025-11-22 05:43:25.699532243 +0000 UTC m=+0.208468641 container attach a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 00:43:25 np0005531754 python3.9[240572]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:43:25 np0005531754 systemd[1]: Reloading.
Nov 22 00:43:25 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:43:26 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]: {
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:    "0": [
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:        {
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "devices": [
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "/dev/loop3"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            ],
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_name": "ceph_lv0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_size": "21470642176",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "name": "ceph_lv0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "tags": {
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cluster_name": "ceph",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.crush_device_class": "",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.encrypted": "0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osd_id": "0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.type": "block",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.vdo": "0"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            },
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "type": "block",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "vg_name": "ceph_vg0"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:        }
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:    ],
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:    "1": [
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:        {
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "devices": [
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "/dev/loop4"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            ],
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_name": "ceph_lv1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_size": "21470642176",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "name": "ceph_lv1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "tags": {
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cluster_name": "ceph",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.crush_device_class": "",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.encrypted": "0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osd_id": "1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.type": "block",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.vdo": "0"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            },
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "type": "block",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "vg_name": "ceph_vg1"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:        }
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:    ],
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:    "2": [
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:        {
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "devices": [
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "/dev/loop5"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            ],
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_name": "ceph_lv2",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_size": "21470642176",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "name": "ceph_lv2",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "tags": {
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.cluster_name": "ceph",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.crush_device_class": "",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.encrypted": "0",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osd_id": "2",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.type": "block",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:                "ceph.vdo": "0"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            },
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "type": "block",
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:            "vg_name": "ceph_vg2"
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:        }
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]:    ]
Nov 22 00:43:26 np0005531754 fervent_haibt[240573]: }
Nov 22 00:43:26 np0005531754 systemd[1]: libpod-a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76.scope: Deactivated successfully.
Nov 22 00:43:26 np0005531754 podman[240527]: 2025-11-22 05:43:26.475400846 +0000 UTC m=+0.984337164 container died a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 22 00:43:26 np0005531754 systemd[1]: var-lib-containers-storage-overlay-90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1-merged.mount: Deactivated successfully.
Nov 22 00:43:26 np0005531754 podman[240527]: 2025-11-22 05:43:26.541738563 +0000 UTC m=+1.050674891 container remove a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:43:26 np0005531754 systemd[1]: libpod-conmon-a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76.scope: Deactivated successfully.
Nov 22 00:43:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:26 np0005531754 python3.9[240830]: ansible-ansible.builtin.service_facts Invoked
Nov 22 00:43:27 np0005531754 network[240894]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 00:43:27 np0005531754 network[240897]: 'network-scripts' will be removed from distribution in near future.
Nov 22 00:43:27 np0005531754 network[240898]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 00:43:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:27 np0005531754 podman[240941]: 2025-11-22 05:43:27.341208521 +0000 UTC m=+0.060257257 container create b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:43:27 np0005531754 podman[240941]: 2025-11-22 05:43:27.310139138 +0000 UTC m=+0.029187934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:43:27 np0005531754 systemd[1]: Started libpod-conmon-b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced.scope.
Nov 22 00:43:28 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:28 np0005531754 podman[240941]: 2025-11-22 05:43:28.041616875 +0000 UTC m=+0.760665621 container init b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:43:28 np0005531754 podman[240941]: 2025-11-22 05:43:28.049861584 +0000 UTC m=+0.768910290 container start b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:43:28 np0005531754 podman[240941]: 2025-11-22 05:43:28.052929365 +0000 UTC m=+0.771978111 container attach b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 00:43:28 np0005531754 gallant_ramanujan[240959]: 167 167
Nov 22 00:43:28 np0005531754 podman[240941]: 2025-11-22 05:43:28.056040907 +0000 UTC m=+0.775089623 container died b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:43:28 np0005531754 systemd[1]: libpod-b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced.scope: Deactivated successfully.
Nov 22 00:43:28 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a73aeeb86f222f0f35b59ed5d5db01699d820b94f17cb6fddd7a322f07ce3176-merged.mount: Deactivated successfully.
Nov 22 00:43:28 np0005531754 podman[240941]: 2025-11-22 05:43:28.094037813 +0000 UTC m=+0.813086539 container remove b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:43:28 np0005531754 systemd[1]: libpod-conmon-b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced.scope: Deactivated successfully.
Nov 22 00:43:28 np0005531754 podman[240993]: 2025-11-22 05:43:28.316028571 +0000 UTC m=+0.071747300 container create d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:43:28 np0005531754 systemd[1]: Started libpod-conmon-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope.
Nov 22 00:43:28 np0005531754 podman[240993]: 2025-11-22 05:43:28.288536403 +0000 UTC m=+0.044255182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:43:28 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:43:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:43:28 np0005531754 podman[240993]: 2025-11-22 05:43:28.41532563 +0000 UTC m=+0.171044329 container init d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:43:28 np0005531754 podman[240993]: 2025-11-22 05:43:28.428014326 +0000 UTC m=+0.183733015 container start d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:43:28 np0005531754 podman[240993]: 2025-11-22 05:43:28.431387605 +0000 UTC m=+0.187106284 container attach d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:43:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:29 np0005531754 elegant_austin[241015]: {
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "osd_id": 1,
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "type": "bluestore"
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:    },
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "osd_id": 2,
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "type": "bluestore"
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:    },
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "osd_id": 0,
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:        "type": "bluestore"
Nov 22 00:43:29 np0005531754 elegant_austin[241015]:    }
Nov 22 00:43:29 np0005531754 elegant_austin[241015]: }
Nov 22 00:43:29 np0005531754 systemd[1]: libpod-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope: Deactivated successfully.
Nov 22 00:43:29 np0005531754 podman[240993]: 2025-11-22 05:43:29.462573879 +0000 UTC m=+1.218292608 container died d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:43:29 np0005531754 systemd[1]: libpod-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope: Consumed 1.043s CPU time.
Nov 22 00:43:29 np0005531754 systemd[1]: var-lib-containers-storage-overlay-78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e-merged.mount: Deactivated successfully.
Nov 22 00:43:29 np0005531754 podman[240993]: 2025-11-22 05:43:29.537440591 +0000 UTC m=+1.293159290 container remove d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:43:29 np0005531754 systemd[1]: libpod-conmon-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope: Deactivated successfully.
Nov 22 00:43:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:43:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:43:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:43:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:43:29 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d5082e7e-3cc2-4864-b924-a0720057f00b does not exist
Nov 22 00:43:29 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 6941ae91-2600-4596-9a6d-ac275627ac8f does not exist
Nov 22 00:43:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:43:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:43:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:32 np0005531754 python3.9[241363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:33 np0005531754 python3.9[241516]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:34 np0005531754 python3.9[241671]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:34 np0005531754 python3.9[241824]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:35 np0005531754 podman[241826]: 2025-11-22 05:43:35.120992999 +0000 UTC m=+0.122772551 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:43:35 np0005531754 python3.9[242003]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:36 np0005531754 python3.9[242156]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:43:36.906 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:43:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:43:36.906 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:43:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:43:36.906 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:43:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:37 np0005531754 python3.9[242309]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:38 np0005531754 python3.9[242462]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:43:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:39 np0005531754 python3.9[242615]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:39 np0005531754 podman[242739]: 2025-11-22 05:43:39.981335508 +0000 UTC m=+0.087105028 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:43:40 np0005531754 python3.9[242787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:40 np0005531754 python3.9[242939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:41 np0005531754 python3.9[243091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:42 np0005531754 python3.9[243243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:43 np0005531754 python3.9[243395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:43:43
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control']
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:43:43 np0005531754 podman[243519]: 2025-11-22 05:43:43.827059922 +0000 UTC m=+0.094907844 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:43:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:43:44 np0005531754 python3.9[243565]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:44 np0005531754 python3.9[243717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:45 np0005531754 python3.9[243869]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:46 np0005531754 python3.9[244021]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:47 np0005531754 python3.9[244173]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:47 np0005531754 python3.9[244325]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:48 np0005531754 python3.9[244477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:49 np0005531754 python3.9[244629]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:50 np0005531754 python3.9[244781]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:50 np0005531754 python3.9[244933]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:43:51 np0005531754 python3.9[245085]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:52 np0005531754 python3.9[245237]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:43:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:43:53 np0005531754 python3.9[245389]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:43:53 np0005531754 systemd[1]: Reloading.
Nov 22 00:43:53 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:43:53 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:43:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:55 np0005531754 python3.9[245575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:55 np0005531754 python3.9[245728]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:56 np0005531754 python3.9[245881]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:43:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3350 writes, 14K keys, 3350 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3350 writes, 3350 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1288 writes, 5837 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.52 MB, 0.01 MB/s#012Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    116.3      0.13              0.06         7    0.019       0      0       0.0       0.0#012  L6      1/0    6.69 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    157.9    130.8      0.31              0.15         6    0.052     24K   3186       0.0       0.0#012 Sum      1/0    6.69 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    111.3    126.5      0.44              0.21        13    0.034     24K   3186       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    119.8    120.2      0.28              0.13         8    0.035     17K   2458       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    157.9    130.8      0.31              0.15         6    0.052     24K   3186       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.3      0.12              0.06         6    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 308.00 MB usage: 1.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(98,1.31 MB,0.425136%) FilterBlock(14,74.67 KB,0.0236759%) IndexBlock(14,152.80 KB,0.0484467%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 00:43:57 np0005531754 python3.9[246034]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:43:57 np0005531754 python3.9[246187]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:43:58 np0005531754 python3.9[246340]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:43:59 np0005531754 python3.9[246493]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:44:00 np0005531754 python3.9[246646]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 00:44:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:02 np0005531754 python3.9[246799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:02 np0005531754 python3.9[246951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:03 np0005531754 python3.9[247103]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:04 np0005531754 python3.9[247255]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:05 np0005531754 python3.9[247407]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:05 np0005531754 podman[247408]: 2025-11-22 05:44:05.355678491 +0000 UTC m=+0.115486851 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:44:06 np0005531754 python3.9[247585]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:06 np0005531754 python3.9[247737]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:07 np0005531754 python3.9[247889]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:08 np0005531754 python3.9[248041]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:09 np0005531754 python3.9[248193]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:10 np0005531754 podman[248218]: 2025-11-22 05:44:10.243824263 +0000 UTC m=+0.086024485 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Nov 22 00:44:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:44:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:44:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:44:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:44:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:44:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:44:14 np0005531754 podman[248240]: 2025-11-22 05:44:14.199573318 +0000 UTC m=+0.062668403 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 00:44:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:14 np0005531754 python3.9[248386]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 22 00:44:15 np0005531754 python3.9[248539]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 00:44:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:16 np0005531754 python3.9[248697]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 00:44:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:17 np0005531754 systemd-logind[798]: New session 50 of user zuul.
Nov 22 00:44:17 np0005531754 systemd[1]: Started Session 50 of User zuul.
Nov 22 00:44:18 np0005531754 systemd[1]: session-50.scope: Deactivated successfully.
Nov 22 00:44:18 np0005531754 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Nov 22 00:44:18 np0005531754 systemd-logind[798]: Removed session 50.
Nov 22 00:44:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:18 np0005531754 python3.9[248883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:19 np0005531754 python3.9[249004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790258.2713304-1249-237980532808694/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:20 np0005531754 python3.9[249154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:20 np0005531754 python3.9[249230]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:21 np0005531754 python3.9[249380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:22 np0005531754 python3.9[249501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790260.9557655-1249-128247134346734/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:22 np0005531754 python3.9[249651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:23 np0005531754 python3.9[249772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790262.3930857-1249-145795537720018/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:24 np0005531754 python3.9[249922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:25 np0005531754 python3.9[250043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790263.877357-1249-142622528374272/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:25 np0005531754 python3.9[250195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:26 np0005531754 python3.9[250316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790265.358967-1249-142473801834477/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:27 np0005531754 python3.9[250468]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:44:28 np0005531754 python3.9[250620]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:44:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:29 np0005531754 python3.9[250772]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:44:30 np0005531754 python3.9[250948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:44:30 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 6af12e0f-8163-47d4-9cf2-9167c6aeb644 does not exist
Nov 22 00:44:30 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 98a69249-7b26-4294-9adb-f7cdbb952225 does not exist
Nov 22 00:44:30 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev b4a7dfd6-46b8-4a0a-bdaa-d8081d1b4e31 does not exist
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:44:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:44:30 np0005531754 python3.9[251166]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763790269.489063-1356-197832007258183/.source _original_basename=.d8m9nk6c follow=False checksum=e9d7a34410fea986092c054aa091a0303ea4e005 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 22 00:44:31 np0005531754 podman[251446]: 2025-11-22 05:44:31.482000012 +0000 UTC m=+0.079430920 container create 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:44:31 np0005531754 systemd[1]: Started libpod-conmon-9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec.scope.
Nov 22 00:44:31 np0005531754 podman[251446]: 2025-11-22 05:44:31.441226244 +0000 UTC m=+0.038657232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:44:31 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:44:31 np0005531754 podman[251446]: 2025-11-22 05:44:31.573712377 +0000 UTC m=+0.171143325 container init 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 00:44:31 np0005531754 podman[251446]: 2025-11-22 05:44:31.581328221 +0000 UTC m=+0.178759139 container start 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:44:31 np0005531754 podman[251446]: 2025-11-22 05:44:31.585671377 +0000 UTC m=+0.183102315 container attach 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:44:31 np0005531754 peaceful_cannon[251489]: 167 167
Nov 22 00:44:31 np0005531754 systemd[1]: libpod-9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec.scope: Deactivated successfully.
Nov 22 00:44:31 np0005531754 podman[251446]: 2025-11-22 05:44:31.587885596 +0000 UTC m=+0.185316524 container died 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:44:31 np0005531754 systemd[1]: var-lib-containers-storage-overlay-aad4641e94bf59978a502ad239592bfcd74b8f21e65d4e6934ef71bebab60f57-merged.mount: Deactivated successfully.
Nov 22 00:44:31 np0005531754 podman[251446]: 2025-11-22 05:44:31.65853204 +0000 UTC m=+0.255962938 container remove 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:44:31 np0005531754 systemd[1]: libpod-conmon-9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec.scope: Deactivated successfully.
Nov 22 00:44:31 np0005531754 python3.9[251488]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:44:31 np0005531754 podman[251517]: 2025-11-22 05:44:31.850570532 +0000 UTC m=+0.056427786 container create cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:44:31 np0005531754 systemd[1]: Started libpod-conmon-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope.
Nov 22 00:44:31 np0005531754 podman[251517]: 2025-11-22 05:44:31.818731582 +0000 UTC m=+0.024588856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:44:31 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:44:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:32 np0005531754 podman[251517]: 2025-11-22 05:44:32.048262234 +0000 UTC m=+0.254119558 container init cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:44:32 np0005531754 podman[251517]: 2025-11-22 05:44:32.062191126 +0000 UTC m=+0.268048410 container start cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:44:32 np0005531754 podman[251517]: 2025-11-22 05:44:32.088899918 +0000 UTC m=+0.294757212 container attach cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:44:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:32 np0005531754 python3.9[251687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:33 np0005531754 compassionate_proskuriakova[251557]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:44:33 np0005531754 compassionate_proskuriakova[251557]: --> relative data size: 1.0
Nov 22 00:44:33 np0005531754 compassionate_proskuriakova[251557]: --> All data devices are unavailable
Nov 22 00:44:33 np0005531754 systemd[1]: libpod-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope: Deactivated successfully.
Nov 22 00:44:33 np0005531754 podman[251517]: 2025-11-22 05:44:33.247779567 +0000 UTC m=+1.453636841 container died cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:44:33 np0005531754 systemd[1]: libpod-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope: Consumed 1.141s CPU time.
Nov 22 00:44:33 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7-merged.mount: Deactivated successfully.
Nov 22 00:44:33 np0005531754 python3.9[251828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790271.9904883-1382-203252441223459/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:33 np0005531754 podman[251517]: 2025-11-22 05:44:33.379692195 +0000 UTC m=+1.585549449 container remove cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 00:44:33 np0005531754 systemd[1]: libpod-conmon-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope: Deactivated successfully.
Nov 22 00:44:34 np0005531754 podman[252137]: 2025-11-22 05:44:34.1526302 +0000 UTC m=+0.056203099 container create b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:44:34 np0005531754 systemd[1]: Started libpod-conmon-b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f.scope.
Nov 22 00:44:34 np0005531754 python3.9[252124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 00:44:34 np0005531754 podman[252137]: 2025-11-22 05:44:34.120937476 +0000 UTC m=+0.024510355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:44:34 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:44:34 np0005531754 podman[252137]: 2025-11-22 05:44:34.270524915 +0000 UTC m=+0.174097864 container init b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:44:34 np0005531754 podman[252137]: 2025-11-22 05:44:34.279576336 +0000 UTC m=+0.183149235 container start b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:44:34 np0005531754 podman[252137]: 2025-11-22 05:44:34.285642538 +0000 UTC m=+0.189215427 container attach b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 00:44:34 np0005531754 affectionate_gagarin[252154]: 167 167
Nov 22 00:44:34 np0005531754 systemd[1]: libpod-b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f.scope: Deactivated successfully.
Nov 22 00:44:34 np0005531754 podman[252137]: 2025-11-22 05:44:34.289153512 +0000 UTC m=+0.192726411 container died b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:44:34 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c40f8b270026f3e212d25ce4149fe2d219b0c6d8a0fb508dd049a1e6e351627e-merged.mount: Deactivated successfully.
Nov 22 00:44:34 np0005531754 podman[252137]: 2025-11-22 05:44:34.344076156 +0000 UTC m=+0.247649015 container remove b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:44:34 np0005531754 systemd[1]: libpod-conmon-b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f.scope: Deactivated successfully.
Nov 22 00:44:34 np0005531754 podman[252248]: 2025-11-22 05:44:34.522270239 +0000 UTC m=+0.051622128 container create 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:44:34 np0005531754 systemd[1]: Started libpod-conmon-2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716.scope.
Nov 22 00:44:34 np0005531754 podman[252248]: 2025-11-22 05:44:34.502295086 +0000 UTC m=+0.031647005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:44:34 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:44:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:34 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:34 np0005531754 podman[252248]: 2025-11-22 05:44:34.685809961 +0000 UTC m=+0.215161920 container init 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 00:44:34 np0005531754 podman[252248]: 2025-11-22 05:44:34.699103866 +0000 UTC m=+0.228455785 container start 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:44:34 np0005531754 podman[252248]: 2025-11-22 05:44:34.706968545 +0000 UTC m=+0.236320464 container attach 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:44:34 np0005531754 python3.9[252317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790273.5494294-1397-160924581257337/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]: {
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:    "0": [
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:        {
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "devices": [
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "/dev/loop3"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            ],
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_name": "ceph_lv0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_size": "21470642176",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "name": "ceph_lv0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "tags": {
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cluster_name": "ceph",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.crush_device_class": "",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.encrypted": "0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osd_id": "0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.type": "block",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.vdo": "0"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            },
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "type": "block",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "vg_name": "ceph_vg0"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:        }
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:    ],
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:    "1": [
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:        {
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "devices": [
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "/dev/loop4"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            ],
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_name": "ceph_lv1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_size": "21470642176",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "name": "ceph_lv1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "tags": {
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cluster_name": "ceph",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.crush_device_class": "",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.encrypted": "0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osd_id": "1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.type": "block",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.vdo": "0"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            },
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "type": "block",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "vg_name": "ceph_vg1"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:        }
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:    ],
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:    "2": [
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:        {
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "devices": [
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "/dev/loop5"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            ],
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_name": "ceph_lv2",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_size": "21470642176",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "name": "ceph_lv2",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "tags": {
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.cluster_name": "ceph",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.crush_device_class": "",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.encrypted": "0",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osd_id": "2",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.type": "block",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:                "ceph.vdo": "0"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            },
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "type": "block",
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:            "vg_name": "ceph_vg2"
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:        }
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]:    ]
Nov 22 00:44:35 np0005531754 fervent_volhard[252288]: }
Nov 22 00:44:35 np0005531754 systemd[1]: libpod-2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716.scope: Deactivated successfully.
Nov 22 00:44:35 np0005531754 podman[252248]: 2025-11-22 05:44:35.467304523 +0000 UTC m=+0.996656442 container died 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:44:35 np0005531754 systemd[1]: var-lib-containers-storage-overlay-37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd-merged.mount: Deactivated successfully.
Nov 22 00:44:35 np0005531754 podman[252248]: 2025-11-22 05:44:35.551621582 +0000 UTC m=+1.080973471 container remove 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:44:35 np0005531754 systemd[1]: libpod-conmon-2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716.scope: Deactivated successfully.
Nov 22 00:44:35 np0005531754 podman[252423]: 2025-11-22 05:44:35.612010023 +0000 UTC m=+0.114483824 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:44:35 np0005531754 python3.9[252539]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 22 00:44:36 np0005531754 podman[252708]: 2025-11-22 05:44:36.25680262 +0000 UTC m=+0.047048415 container create 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:44:36 np0005531754 systemd[1]: Started libpod-conmon-1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b.scope.
Nov 22 00:44:36 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:44:36 np0005531754 podman[252708]: 2025-11-22 05:44:36.238208994 +0000 UTC m=+0.028454819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:44:36 np0005531754 podman[252708]: 2025-11-22 05:44:36.35015055 +0000 UTC m=+0.140396435 container init 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:44:36 np0005531754 podman[252708]: 2025-11-22 05:44:36.362992802 +0000 UTC m=+0.153238627 container start 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:44:36 np0005531754 podman[252708]: 2025-11-22 05:44:36.367343638 +0000 UTC m=+0.157589503 container attach 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:44:36 np0005531754 eager_mccarthy[252764]: 167 167
Nov 22 00:44:36 np0005531754 systemd[1]: libpod-1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b.scope: Deactivated successfully.
Nov 22 00:44:36 np0005531754 podman[252708]: 2025-11-22 05:44:36.371400437 +0000 UTC m=+0.161646262 container died 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:44:36 np0005531754 systemd[1]: var-lib-containers-storage-overlay-d3557e7e58c7a917e278f09fd1513bd122663d0803c0500ea1643f1bf56f62eb-merged.mount: Deactivated successfully.
Nov 22 00:44:36 np0005531754 podman[252708]: 2025-11-22 05:44:36.42852847 +0000 UTC m=+0.218774295 container remove 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:44:36 np0005531754 systemd[1]: libpod-conmon-1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b.scope: Deactivated successfully.
Nov 22 00:44:36 np0005531754 podman[252845]: 2025-11-22 05:44:36.677438229 +0000 UTC m=+0.070626425 container create 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:44:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:36 np0005531754 systemd[1]: Started libpod-conmon-1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227.scope.
Nov 22 00:44:36 np0005531754 podman[252845]: 2025-11-22 05:44:36.641220443 +0000 UTC m=+0.034408659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:44:36 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:44:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:36 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:44:36 np0005531754 python3.9[252839]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 00:44:36 np0005531754 podman[252845]: 2025-11-22 05:44:36.771098767 +0000 UTC m=+0.164287013 container init 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:44:36 np0005531754 podman[252845]: 2025-11-22 05:44:36.780382695 +0000 UTC m=+0.173570931 container start 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:44:36 np0005531754 podman[252845]: 2025-11-22 05:44:36.786075666 +0000 UTC m=+0.179263902 container attach 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 00:44:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:44:36.907 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:44:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:44:36.908 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:44:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:44:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:44:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]: {
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "osd_id": 1,
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "type": "bluestore"
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:    },
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "osd_id": 2,
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "type": "bluestore"
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:    },
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "osd_id": 0,
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:        "type": "bluestore"
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]:    }
Nov 22 00:44:37 np0005531754 hopeful_herschel[252862]: }
Nov 22 00:44:37 np0005531754 python3[253023]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 00:44:37 np0005531754 systemd[1]: libpod-1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227.scope: Deactivated successfully.
Nov 22 00:44:37 np0005531754 podman[252845]: 2025-11-22 05:44:37.728521872 +0000 UTC m=+1.121710098 container died 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:44:37 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c-merged.mount: Deactivated successfully.
Nov 22 00:44:37 np0005531754 podman[252845]: 2025-11-22 05:44:37.797752179 +0000 UTC m=+1.190940415 container remove 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:44:37 np0005531754 systemd[1]: libpod-conmon-1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227.scope: Deactivated successfully.
Nov 22 00:44:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:44:37 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:44:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:44:37 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:44:37 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev af919446-7208-4a61-b04a-fa858562fecd does not exist
Nov 22 00:44:37 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 206ca234-e3cd-49d9-b8e7-9edba28a5591 does not exist
Nov 22 00:44:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:38 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:44:38 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:44:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:43 np0005531754 podman[253160]: 2025-11-22 05:44:43.369748391 +0000 UTC m=+2.233404399 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:44:43
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control', 'volumes', '.mgr']
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:44:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:44:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:47 np0005531754 podman[253201]: 2025-11-22 05:44:47.213028166 +0000 UTC m=+2.075794284 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:44:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:48 np0005531754 podman[253070]: 2025-11-22 05:44:48.189651314 +0000 UTC m=+10.381877708 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 00:44:48 np0005531754 podman[253246]: 2025-11-22 05:44:48.567909323 +0000 UTC m=+0.037560453 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 00:44:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:49 np0005531754 podman[253246]: 2025-11-22 05:44:49.696648697 +0000 UTC m=+1.166299817 container create dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 00:44:49 np0005531754 python3[253023]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 22 00:44:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:44:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:44:53 np0005531754 python3.9[253440]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:44:54 np0005531754 python3.9[253594]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 22 00:44:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:55 np0005531754 python3.9[253746]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 00:44:56 np0005531754 python3[253898]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 00:44:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:56 np0005531754 podman[253934]: 2025-11-22 05:44:56.617662479 +0000 UTC m=+0.052797060 container create 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 22 00:44:56 np0005531754 podman[253934]: 2025-11-22 05:44:56.591067459 +0000 UTC m=+0.026202070 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 00:44:56 np0005531754 python3[253898]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 22 00:44:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:44:57 np0005531754 python3.9[254124]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:44:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:44:58 np0005531754 python3.9[254278]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:44:59 np0005531754 python3.9[254429]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763790298.6206381-1489-28034645692811/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 00:44:59 np0005531754 python3.9[254505]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 00:44:59 np0005531754 systemd[1]: Reloading.
Nov 22 00:45:00 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:45:00 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:45:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:00 np0005531754 python3.9[254616]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 00:45:00 np0005531754 systemd[1]: Reloading.
Nov 22 00:45:01 np0005531754 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 00:45:01 np0005531754 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 00:45:01 np0005531754 systemd[1]: Starting nova_compute container...
Nov 22 00:45:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:01 np0005531754 podman[254655]: 2025-11-22 05:45:01.512071859 +0000 UTC m=+0.119429147 container init 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 22 00:45:01 np0005531754 podman[254655]: 2025-11-22 05:45:01.524402948 +0000 UTC m=+0.131760216 container start 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 00:45:01 np0005531754 podman[254655]: nova_compute
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + sudo -E kolla_set_configs
Nov 22 00:45:01 np0005531754 systemd[1]: Started nova_compute container.
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Validating config file
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying service configuration files
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Deleting /etc/ceph
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Creating directory /etc/ceph
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Writing out command to execute
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:01 np0005531754 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 00:45:01 np0005531754 nova_compute[254670]: ++ cat /run_command
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + CMD=nova-compute
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + ARGS=
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + sudo kolla_copy_cacerts
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + [[ ! -n '' ]]
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + . kolla_extend_start
Nov 22 00:45:01 np0005531754 nova_compute[254670]: Running command: 'nova-compute'
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + umask 0022
Nov 22 00:45:01 np0005531754 nova_compute[254670]: + exec nova-compute
Nov 22 00:45:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:02 np0005531754 python3.9[254832]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:45:03 np0005531754 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 00:45:03 np0005531754 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 00:45:03 np0005531754 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 00:45:03 np0005531754 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 22 00:45:03 np0005531754 python3.9[254982]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:45:03 np0005531754 nova_compute[254670]: 2025-11-22 05:45:03.725 254674 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:45:03 np0005531754 nova_compute[254670]: 2025-11-22 05:45:03.753 254674 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:45:03 np0005531754 nova_compute[254670]: 2025-11-22 05:45:03.753 254674 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 22 00:45:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.426 254674 INFO nova.virt.driver [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.551 254674 INFO nova.compute.provider_config [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.566 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 python3.9[255136]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 WARNING oslo_config.cfg [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 00:45:04 np0005531754 nova_compute[254670]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 00:45:04 np0005531754 nova_compute[254670]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 00:45:04 np0005531754 nova_compute[254670]: and ``live_migration_inbound_addr`` respectively.
Nov 22 00:45:04 np0005531754 nova_compute[254670]: ).  Its value may be silently ignored in the future.#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_secret_uuid        = 13fdadc6-d566-5465-9ac8-a148ef130da1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.701 254674 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.727 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.728 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.728 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.728 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 22 00:45:04 np0005531754 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 00:45:04 np0005531754 systemd[1]: Started libvirt QEMU daemon.
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.802 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd968fecd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.805 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd968fecd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.806 254674 INFO nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.827 254674 WARNING nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 22 00:45:04 np0005531754 nova_compute[254670]: 2025-11-22 05:45:04.827 254674 DEBUG nova.virt.libvirt.volume.mount [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 22 00:45:05 np0005531754 python3.9[255348]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 00:45:05 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 2025-11-22 05:45:05.887 254674 INFO nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <host>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <uuid>66851c39-840f-46c8-adfc-77dc6a7d91a4</uuid>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <cpu>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <arch>x86_64</arch>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model>EPYC-Rome-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <vendor>AMD</vendor>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <microcode version='16777317'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <signature family='23' model='49' stepping='0'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='x2apic'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='tsc-deadline'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='osxsave'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='hypervisor'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='tsc_adjust'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='spec-ctrl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='stibp'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='arch-capabilities'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='cmp_legacy'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='topoext'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='virt-ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='lbrv'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='tsc-scale'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='vmcb-clean'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='pause-filter'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='pfthreshold'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='svme-addr-chk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='rdctl-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='skip-l1dfl-vmentry'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='mds-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature name='pschange-mc-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <pages unit='KiB' size='4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <pages unit='KiB' size='2048'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <pages unit='KiB' size='1048576'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </cpu>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <power_management>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <suspend_mem/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </power_management>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <iommu support='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <migration_features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <live/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <uri_transports>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <uri_transport>tcp</uri_transport>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <uri_transport>rdma</uri_transport>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </uri_transports>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </migration_features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <topology>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <cells num='1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <cell id='0'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          <memory unit='KiB'>7864320</memory>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          <pages unit='KiB' size='2048'>0</pages>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          <distances>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <sibling id='0' value='10'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          </distances>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          <cpus num='8'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:          </cpus>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        </cell>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </cells>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </topology>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <cache>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </cache>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <secmodel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model>selinux</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <doi>0</doi>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </secmodel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <secmodel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model>dac</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <doi>0</doi>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </secmodel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </host>
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <guest>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <os_type>hvm</os_type>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <arch name='i686'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <wordsize>32</wordsize>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <domain type='qemu'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <domain type='kvm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </arch>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <pae/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <nonpae/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <acpi default='on' toggle='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <apic default='on' toggle='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <cpuselection/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <deviceboot/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <disksnapshot default='on' toggle='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <externalSnapshot/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </guest>
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <guest>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <os_type>hvm</os_type>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <arch name='x86_64'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <wordsize>64</wordsize>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <domain type='qemu'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <domain type='kvm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </arch>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <acpi default='on' toggle='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <apic default='on' toggle='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <cpuselection/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <deviceboot/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <disksnapshot default='on' toggle='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <externalSnapshot/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </guest>
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 
Nov 22 00:45:05 np0005531754 nova_compute[254670]: </capabilities>
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 2025-11-22 05:45:05.895 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 2025-11-22 05:45:05.921 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 00:45:05 np0005531754 nova_compute[254670]: <domainCapabilities>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <domain>kvm</domain>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <arch>i686</arch>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <vcpu max='240'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <iothreads supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <os supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <enum name='firmware'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <loader supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>rom</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>pflash</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='readonly'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>yes</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='secure'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </loader>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </os>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <cpu>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='maximumMigratable'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <vendor>AMD</vendor>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='succor'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='custom' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx10'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx10-128'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx10-256'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx10-512'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='SierraForest'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Snowridge'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='athlon'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='athlon-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='core2duo'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='core2duo-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='coreduo'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='coreduo-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='n270'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='n270-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='phenom'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='phenom-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </cpu>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <memoryBacking supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <enum name='sourceType'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <value>file</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <value>anonymous</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <value>memfd</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </memoryBacking>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <devices>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <disk supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='diskDevice'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>disk</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>cdrom</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>floppy</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>lun</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>ide</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>fdc</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>sata</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </disk>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <graphics supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>vnc</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>egl-headless</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </graphics>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <video supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='modelType'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>vga</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>cirrus</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>none</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>bochs</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>ramfb</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </video>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <hostdev supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='mode'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>subsystem</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='startupPolicy'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>mandatory</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>requisite</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>optional</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='subsysType'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>pci</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='capsType'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='pciBackend'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </hostdev>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <rng supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>random</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>egd</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </rng>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <filesystem supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='driverType'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>path</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>handle</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>virtiofs</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </filesystem>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <tpm supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>tpm-tis</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>tpm-crb</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>emulator</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>external</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='backendVersion'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>2.0</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </tpm>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <redirdev supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </redirdev>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <channel supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </channel>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <crypto supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='model'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>qemu</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </crypto>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <interface supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='backendType'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>passt</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </interface>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <panic supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>isa</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>hyperv</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </panic>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <console supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>null</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>vc</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>dev</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>file</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>pipe</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>stdio</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>udp</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>tcp</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>qemu-vdagent</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </console>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </devices>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <gic supported='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <genid supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <backup supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <async-teardown supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <ps2 supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <sev supported='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <sgx supported='no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <hyperv supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='features'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>relaxed</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>vapic</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>spinlocks</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>vpindex</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>runtime</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>synic</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>stimer</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>reset</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>vendor_id</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>frequencies</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>reenlightenment</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>tlbflush</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>ipi</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>avic</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>emsr_bitmap</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>xmm_input</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <defaults>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </defaults>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </hyperv>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <launchSecurity supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='sectype'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>tdx</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </launchSecurity>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </features>
Nov 22 00:45:05 np0005531754 nova_compute[254670]: </domainCapabilities>
Nov 22 00:45:05 np0005531754 nova_compute[254670]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 00:45:05 np0005531754 nova_compute[254670]: 2025-11-22 05:45:05.928 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 00:45:05 np0005531754 nova_compute[254670]: <domainCapabilities>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <domain>kvm</domain>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <arch>i686</arch>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <vcpu max='4096'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <iothreads supported='yes'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <os supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <enum name='firmware'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <loader supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>rom</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>pflash</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='readonly'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>yes</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='secure'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </loader>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  </os>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:  <cpu>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <enum name='maximumMigratable'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <vendor>AMD</vendor>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='succor'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:    <mode name='custom' supported='yes'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v1'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v3'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:05 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-128'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-256'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-512'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SierraForest'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 podman[255373]: 2025-11-22 05:45:06.026383531 +0000 UTC m=+0.113874537 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='athlon'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='athlon-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='core2duo'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='core2duo-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='coreduo'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='coreduo-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='n270'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='n270-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='phenom'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='phenom-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </cpu>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <memoryBacking supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <enum name='sourceType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>file</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>anonymous</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>memfd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </memoryBacking>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <devices>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <disk supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='diskDevice'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>disk</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>cdrom</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>floppy</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>lun</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>fdc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>sata</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </disk>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <graphics supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vnc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>egl-headless</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </graphics>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <video supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='modelType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vga</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>cirrus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>none</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>bochs</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>ramfb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </video>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <hostdev supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='mode'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>subsystem</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='startupPolicy'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>mandatory</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>requisite</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>optional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='subsysType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pci</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='capsType'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='pciBackend'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </hostdev>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <rng supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>random</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>egd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </rng>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <filesystem supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='driverType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>path</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>handle</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtiofs</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </filesystem>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <tpm supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tpm-tis</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tpm-crb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>emulator</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>external</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendVersion'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>2.0</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </tpm>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <redirdev supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </redirdev>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <channel supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </channel>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <crypto supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>qemu</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </crypto>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <interface supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>passt</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </interface>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <panic supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>isa</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>hyperv</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </panic>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <console supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>null</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dev</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>file</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pipe</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>stdio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>udp</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tcp</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>qemu-vdagent</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </console>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </devices>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <features>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <gic supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <genid supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <backup supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <async-teardown supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <ps2 supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <sev supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <sgx supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <hyperv supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='features'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>relaxed</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vapic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>spinlocks</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vpindex</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>runtime</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>synic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>stimer</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>reset</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vendor_id</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>frequencies</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>reenlightenment</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tlbflush</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>ipi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>avic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>emsr_bitmap</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>xmm_input</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <defaults>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </defaults>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </hyperv>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <launchSecurity supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='sectype'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tdx</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </launchSecurity>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </features>
Nov 22 00:45:06 np0005531754 nova_compute[254670]: </domainCapabilities>
Nov 22 00:45:06 np0005531754 nova_compute[254670]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:05.955 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:05.960 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 00:45:06 np0005531754 nova_compute[254670]: <domainCapabilities>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <domain>kvm</domain>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <arch>x86_64</arch>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <vcpu max='240'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <iothreads supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <os supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <enum name='firmware'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <loader supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>rom</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pflash</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='readonly'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>yes</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='secure'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </loader>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </os>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <cpu>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='maximumMigratable'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <vendor>AMD</vendor>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='succor'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='custom' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-128'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-256'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-512'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SierraForest'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='athlon'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='athlon-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='core2duo'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='core2duo-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='coreduo'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='coreduo-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='n270'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='n270-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='phenom'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='phenom-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </cpu>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <memoryBacking supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <enum name='sourceType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>file</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>anonymous</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>memfd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </memoryBacking>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <devices>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <disk supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='diskDevice'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>disk</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>cdrom</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>floppy</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>lun</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>ide</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>fdc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>sata</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </disk>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <graphics supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vnc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>egl-headless</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </graphics>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <video supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='modelType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vga</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>cirrus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>none</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>bochs</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>ramfb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </video>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <hostdev supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='mode'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>subsystem</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='startupPolicy'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>mandatory</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>requisite</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>optional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='subsysType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pci</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='capsType'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='pciBackend'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </hostdev>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <rng supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>random</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>egd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </rng>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <filesystem supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='driverType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>path</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>handle</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtiofs</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </filesystem>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <tpm supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tpm-tis</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tpm-crb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>emulator</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>external</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendVersion'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>2.0</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </tpm>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <redirdev supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </redirdev>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <channel supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </channel>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <crypto supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>qemu</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </crypto>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <interface supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>passt</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </interface>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <panic supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>isa</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>hyperv</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </panic>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <console supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>null</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dev</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>file</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pipe</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>stdio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>udp</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tcp</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>qemu-vdagent</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </console>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </devices>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <features>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <gic supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <genid supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <backup supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <async-teardown supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <ps2 supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <sev supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <sgx supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <hyperv supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='features'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>relaxed</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vapic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>spinlocks</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vpindex</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>runtime</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>synic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>stimer</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>reset</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vendor_id</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>frequencies</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>reenlightenment</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tlbflush</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>ipi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>avic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>emsr_bitmap</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>xmm_input</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <defaults>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </defaults>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </hyperv>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <launchSecurity supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='sectype'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tdx</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </launchSecurity>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </features>
Nov 22 00:45:06 np0005531754 nova_compute[254670]: </domainCapabilities>
Nov 22 00:45:06 np0005531754 nova_compute[254670]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.045 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 00:45:06 np0005531754 nova_compute[254670]: <domainCapabilities>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <domain>kvm</domain>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <arch>x86_64</arch>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <vcpu max='4096'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <iothreads supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <os supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <enum name='firmware'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>efi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <loader supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>rom</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pflash</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='readonly'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>yes</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='secure'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>yes</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>no</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </loader>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </os>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <cpu>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='maximumMigratable'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>on</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>off</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <vendor>AMD</vendor>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='succor'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <mode name='custom' supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Denverton-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='auto-ibrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amd-psfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='stibp-always-on'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='EPYC-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-128'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-256'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx10-512'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='prefetchiti'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Haswell-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512er'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512pf'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fma4'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tbm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xop'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='amx-tile'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-bf16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-fp16'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bitalg'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrc'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fzrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='la57'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='taa-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xfd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SierraForest'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ifma'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cmpccxadd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fbsdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='fsrs'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ibrs-all'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mcdt-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pbrsb-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='psdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='serialize'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vaes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='hle'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='rtm'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512bw'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512cd'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512dq'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512f'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='avx512vl'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='invpcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pcid'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='pku'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='mpx'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='core-capability'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='split-lock-detect'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='cldemote'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='erms'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='gfni'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdir64b'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='movdiri'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='xsaves'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='athlon'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='athlon-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='core2duo'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='core2duo-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='coreduo'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='coreduo-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='n270'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='n270-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='ss'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='phenom'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <blockers model='phenom-v1'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnow'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <feature name='3dnowext'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </blockers>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </mode>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </cpu>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <memoryBacking supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <enum name='sourceType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>file</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>anonymous</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <value>memfd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </memoryBacking>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <devices>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <disk supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='diskDevice'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>disk</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>cdrom</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>floppy</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>lun</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>fdc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>sata</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </disk>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <graphics supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vnc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>egl-headless</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </graphics>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <video supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='modelType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vga</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>cirrus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>none</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>bochs</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>ramfb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </video>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <hostdev supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='mode'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>subsystem</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='startupPolicy'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>mandatory</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>requisite</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>optional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='subsysType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pci</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>scsi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='capsType'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='pciBackend'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </hostdev>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <rng supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtio-non-transitional</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>random</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>egd</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </rng>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <filesystem supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='driverType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>path</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>handle</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>virtiofs</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </filesystem>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <tpm supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tpm-tis</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tpm-crb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>emulator</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>external</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendVersion'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>2.0</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </tpm>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <redirdev supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='bus'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>usb</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </redirdev>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <channel supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </channel>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <crypto supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>qemu</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendModel'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>builtin</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </crypto>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <interface supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='backendType'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>default</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>passt</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </interface>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <panic supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='model'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>isa</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>hyperv</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </panic>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <console supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='type'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>null</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vc</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pty</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dev</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>file</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>pipe</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>stdio</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>udp</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tcp</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>unix</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>qemu-vdagent</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>dbus</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </console>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </devices>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  <features>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <gic supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <genid supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <backup supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <async-teardown supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <ps2 supported='yes'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <sev supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <sgx supported='no'/>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <hyperv supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='features'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>relaxed</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vapic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>spinlocks</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vpindex</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>runtime</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>synic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>stimer</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>reset</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>vendor_id</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>frequencies</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>reenlightenment</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tlbflush</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>ipi</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>avic</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>emsr_bitmap</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>xmm_input</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <defaults>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </defaults>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </hyperv>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    <launchSecurity supported='yes'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      <enum name='sectype'>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:        <value>tdx</value>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:      </enum>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:    </launchSecurity>
Nov 22 00:45:06 np0005531754 nova_compute[254670]:  </features>
Nov 22 00:45:06 np0005531754 nova_compute[254670]: </domainCapabilities>
Nov 22 00:45:06 np0005531754 nova_compute[254670]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.159 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.160 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.160 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.160 254674 INFO nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Secure Boot support detected
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.163 254674 INFO nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.164 254674 INFO nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.181 254674 DEBUG nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.224 254674 INFO nova.virt.node [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Determined node identity 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from /var/lib/nova/compute_id
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.246 254674 WARNING nova.compute.manager [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Compute nodes ['7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.296 254674 INFO nova.compute.manager [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.339 254674 WARNING nova.compute.manager [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.340 254674 DEBUG oslo_concurrency.lockutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.340 254674 DEBUG oslo_concurrency.lockutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.340 254674 DEBUG oslo_concurrency.lockutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.341 254674 DEBUG nova.compute.resource_tracker [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.341 254674 DEBUG oslo_concurrency.processutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 00:45:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:45:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128570508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.807 254674 DEBUG oslo_concurrency.processutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 00:45:06 np0005531754 python3.9[255574]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 00:45:06 np0005531754 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 00:45:06 np0005531754 systemd[1]: Stopping nova_compute container...
Nov 22 00:45:06 np0005531754 systemd[1]: Started libvirt nodedev daemon.
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.959 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.960 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 00:45:06 np0005531754 nova_compute[254670]: 2025-11-22 05:45:06.960 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 00:45:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:07 np0005531754 virtqemud[255182]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 00:45:07 np0005531754 virtqemud[255182]: hostname: compute-0
Nov 22 00:45:07 np0005531754 virtqemud[255182]: End of file while reading data: Input/output error
Nov 22 00:45:07 np0005531754 systemd[1]: libpod-348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d.scope: Deactivated successfully.
Nov 22 00:45:07 np0005531754 systemd[1]: libpod-348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d.scope: Consumed 3.518s CPU time.
Nov 22 00:45:07 np0005531754 podman[255591]: 2025-11-22 05:45:07.41208499 +0000 UTC m=+0.500407098 container died 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 22 00:45:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d-userdata-shm.mount: Deactivated successfully.
Nov 22 00:45:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203-merged.mount: Deactivated successfully.
Nov 22 00:45:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:08 np0005531754 podman[255591]: 2025-11-22 05:45:08.584300635 +0000 UTC m=+1.672622703 container cleanup 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:45:08 np0005531754 podman[255591]: nova_compute
Nov 22 00:45:08 np0005531754 podman[255631]: nova_compute
Nov 22 00:45:08 np0005531754 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 22 00:45:08 np0005531754 systemd[1]: Stopped nova_compute container.
Nov 22 00:45:08 np0005531754 systemd[1]: Starting nova_compute container...
Nov 22 00:45:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:08 np0005531754 podman[255644]: 2025-11-22 05:45:08.870445946 +0000 UTC m=+0.150900456 container init 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm)
Nov 22 00:45:08 np0005531754 podman[255644]: 2025-11-22 05:45:08.880959637 +0000 UTC m=+0.161414117 container start 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, config_id=edpm, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:45:08 np0005531754 podman[255644]: nova_compute
Nov 22 00:45:08 np0005531754 nova_compute[255660]: + sudo -E kolla_set_configs
Nov 22 00:45:08 np0005531754 systemd[1]: Started nova_compute container.
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Validating config file
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying service configuration files
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /etc/ceph
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Creating directory /etc/ceph
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Writing out command to execute
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:08 np0005531754 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 00:45:08 np0005531754 nova_compute[255660]: ++ cat /run_command
Nov 22 00:45:08 np0005531754 nova_compute[255660]: + CMD=nova-compute
Nov 22 00:45:08 np0005531754 nova_compute[255660]: + ARGS=
Nov 22 00:45:08 np0005531754 nova_compute[255660]: + sudo kolla_copy_cacerts
Nov 22 00:45:09 np0005531754 nova_compute[255660]: + [[ ! -n '' ]]
Nov 22 00:45:09 np0005531754 nova_compute[255660]: + . kolla_extend_start
Nov 22 00:45:09 np0005531754 nova_compute[255660]: Running command: 'nova-compute'
Nov 22 00:45:09 np0005531754 nova_compute[255660]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 00:45:09 np0005531754 nova_compute[255660]: + umask 0022
Nov 22 00:45:09 np0005531754 nova_compute[255660]: + exec nova-compute
Nov 22 00:45:09 np0005531754 python3.9[255823]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 00:45:10 np0005531754 systemd[1]: Started libpod-conmon-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556.scope.
Nov 22 00:45:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:10 np0005531754 podman[255849]: 2025-11-22 05:45:10.198031864 +0000 UTC m=+0.171447453 container init dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible)
Nov 22 00:45:10 np0005531754 podman[255849]: 2025-11-22 05:45:10.210084676 +0000 UTC m=+0.183500225 container start dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3)
Nov 22 00:45:10 np0005531754 python3.9[255823]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Applying nova statedir ownership
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 22 00:45:10 np0005531754 nova_compute_init[255870]: INFO:nova_statedir:Nova statedir ownership complete
Nov 22 00:45:10 np0005531754 systemd[1]: libpod-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556.scope: Deactivated successfully.
Nov 22 00:45:10 np0005531754 podman[255871]: 2025-11-22 05:45:10.300830837 +0000 UTC m=+0.046696267 container died dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:45:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360-merged.mount: Deactivated successfully.
Nov 22 00:45:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556-userdata-shm.mount: Deactivated successfully.
Nov 22 00:45:10 np0005531754 podman[255881]: 2025-11-22 05:45:10.363846157 +0000 UTC m=+0.054575966 container cleanup dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true)
Nov 22 00:45:10 np0005531754 systemd[1]: libpod-conmon-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556.scope: Deactivated successfully.
Nov 22 00:45:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:10 np0005531754 systemd[1]: session-49.scope: Deactivated successfully.
Nov 22 00:45:10 np0005531754 systemd[1]: session-49.scope: Consumed 2min 44.941s CPU time.
Nov 22 00:45:10 np0005531754 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Nov 22 00:45:10 np0005531754 systemd-logind[798]: Removed session 49.
Nov 22 00:45:10 np0005531754 nova_compute[255660]: 2025-11-22 05:45:10.964 255664 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 00:45:10 np0005531754 nova_compute[255660]: 2025-11-22 05:45:10.965 255664 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 00:45:10 np0005531754 nova_compute[255660]: 2025-11-22 05:45:10.965 255664 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 00:45:10 np0005531754 nova_compute[255660]: 2025-11-22 05:45:10.965 255664 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.093 255664 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.119 255664 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.120 255664 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.680 255664 INFO nova.virt.driver [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.792 255664 INFO nova.compute.provider_config [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.812 255664 DEBUG oslo_concurrency.lockutils [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.812 255664 DEBUG oslo_concurrency.lockutils [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.812 255664 DEBUG oslo_concurrency.lockutils [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 WARNING oslo_config.cfg [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 00:45:11 np0005531754 nova_compute[255660]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 00:45:11 np0005531754 nova_compute[255660]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 00:45:11 np0005531754 nova_compute[255660]: and ``live_migration_inbound_addr`` respectively.
Nov 22 00:45:11 np0005531754 nova_compute[255660]: ).  Its value may be silently ignored in the future.#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_secret_uuid        = 13fdadc6-d566-5465-9ac8-a148ef130da1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.945 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.945 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.946 255664 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.965 255664 INFO nova.virt.node [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Determined node identity 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from /var/lib/nova/compute_id
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.965 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.966 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.966 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.966 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.979 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f21ba2ea550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.982 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f21ba2ea550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.983 255664 INFO nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Connection event '1' reason 'None'
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.988 255664 INFO nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 
Nov 22 00:45:11 np0005531754 nova_compute[255660]:  <host>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <uuid>66851c39-840f-46c8-adfc-77dc6a7d91a4</uuid>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <cpu>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <arch>x86_64</arch>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <model>EPYC-Rome-v4</model>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <vendor>AMD</vendor>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <microcode version='16777317'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <signature family='23' model='49' stepping='0'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='x2apic'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='tsc-deadline'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='osxsave'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='hypervisor'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='tsc_adjust'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='spec-ctrl'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='stibp'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='arch-capabilities'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='ssbd'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='cmp_legacy'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='topoext'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='virt-ssbd'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='lbrv'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='tsc-scale'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='vmcb-clean'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='pause-filter'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='pfthreshold'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='svme-addr-chk'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='rdctl-no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='skip-l1dfl-vmentry'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='mds-no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <feature name='pschange-mc-no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <pages unit='KiB' size='4'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <pages unit='KiB' size='2048'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <pages unit='KiB' size='1048576'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </cpu>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <power_management>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <suspend_mem/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </power_management>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <iommu support='no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <migration_features>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <live/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <uri_transports>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:        <uri_transport>tcp</uri_transport>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:        <uri_transport>rdma</uri_transport>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      </uri_transports>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </migration_features>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <topology>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <cells num='1'>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:        <cell id='0'>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          <memory unit='KiB'>7864320</memory>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          <pages unit='KiB' size='2048'>0</pages>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          <distances>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <sibling id='0' value='10'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          </distances>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          <cpus num='8'>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:          </cpus>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:        </cell>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      </cells>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </topology>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <cache>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </cache>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <secmodel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <model>selinux</model>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <doi>0</doi>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </secmodel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <secmodel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <model>dac</model>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <doi>0</doi>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </secmodel>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:  </host>
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 
Nov 22 00:45:11 np0005531754 nova_compute[255660]:  <guest>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <os_type>hvm</os_type>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <arch name='i686'>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <wordsize>32</wordsize>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <domain type='qemu'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <domain type='kvm'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </arch>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <features>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <pae/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <nonpae/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <acpi default='on' toggle='yes'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <apic default='on' toggle='no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <cpuselection/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <deviceboot/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <disksnapshot default='on' toggle='no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <externalSnapshot/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </features>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:  </guest>
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 
Nov 22 00:45:11 np0005531754 nova_compute[255660]:  <guest>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <os_type>hvm</os_type>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <arch name='x86_64'>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <wordsize>64</wordsize>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <domain type='qemu'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <domain type='kvm'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </arch>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    <features>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <acpi default='on' toggle='yes'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <apic default='on' toggle='no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <cpuselection/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <deviceboot/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <disksnapshot default='on' toggle='no'/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:      <externalSnapshot/>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:    </features>
Nov 22 00:45:11 np0005531754 nova_compute[255660]:  </guest>
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 
Nov 22 00:45:11 np0005531754 nova_compute[255660]: </capabilities>
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.996 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 00:45:11 np0005531754 nova_compute[255660]: 2025-11-22 05:45:11.998 255664 DEBUG nova.virt.libvirt.volume.mount [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.002 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 00:45:12 np0005531754 nova_compute[255660]: <domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <domain>kvm</domain>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <arch>i686</arch>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <vcpu max='240'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <iothreads supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <os supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='firmware'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <loader supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>rom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pflash</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='readonly'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>yes</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='secure'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </loader>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </os>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='maximumMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <vendor>AMD</vendor>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='succor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='custom' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-128'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-256'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-512'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <memoryBacking supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='sourceType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>anonymous</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>memfd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </memoryBacking>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <disk supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='diskDevice'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>disk</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cdrom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>floppy</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>lun</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ide</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>fdc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>sata</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </disk>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <graphics supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vnc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egl-headless</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </graphics>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <video supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='modelType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vga</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cirrus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>none</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>bochs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ramfb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </video>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hostdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='mode'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>subsystem</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='startupPolicy'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>mandatory</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>requisite</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>optional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='subsysType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pci</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='capsType'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='pciBackend'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hostdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <rng supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>random</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </rng>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <filesystem supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='driverType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>path</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>handle</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtiofs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </filesystem>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <tpm supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-tis</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-crb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emulator</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>external</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendVersion'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>2.0</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </tpm>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <redirdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </redirdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <channel supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </channel>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <crypto supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </crypto>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <interface supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>passt</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </interface>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <panic supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>isa</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>hyperv</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </panic>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <console supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>null</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dev</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pipe</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stdio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>udp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tcp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu-vdagent</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </console>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <gic supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <genid supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backup supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <async-teardown supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <ps2 supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sev supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sgx supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hyperv supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='features'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>relaxed</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vapic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>spinlocks</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vpindex</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>runtime</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>synic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stimer</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reset</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vendor_id</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>frequencies</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reenlightenment</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tlbflush</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ipi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>avic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emsr_bitmap</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>xmm_input</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hyperv>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <launchSecurity supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='sectype'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tdx</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </launchSecurity>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: </domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.010 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 00:45:12 np0005531754 nova_compute[255660]: <domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <domain>kvm</domain>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <arch>i686</arch>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <vcpu max='4096'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <iothreads supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <os supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='firmware'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <loader supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>rom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pflash</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='readonly'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>yes</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='secure'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </loader>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </os>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='maximumMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <vendor>AMD</vendor>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='succor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='custom' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-128'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-256'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-512'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <memoryBacking supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='sourceType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>anonymous</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>memfd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </memoryBacking>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <disk supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='diskDevice'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>disk</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cdrom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>floppy</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>lun</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>fdc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>sata</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </disk>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <graphics supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vnc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egl-headless</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </graphics>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <video supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='modelType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vga</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cirrus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>none</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>bochs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ramfb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </video>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hostdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='mode'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>subsystem</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='startupPolicy'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>mandatory</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>requisite</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>optional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='subsysType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pci</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='capsType'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='pciBackend'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hostdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <rng supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>random</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </rng>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <filesystem supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='driverType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>path</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>handle</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtiofs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </filesystem>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <tpm supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-tis</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-crb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emulator</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>external</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendVersion'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>2.0</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </tpm>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <redirdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </redirdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <channel supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </channel>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <crypto supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </crypto>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <interface supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>passt</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </interface>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <panic supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>isa</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>hyperv</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </panic>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <console supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>null</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dev</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pipe</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stdio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>udp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tcp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu-vdagent</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </console>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <gic supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <genid supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backup supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <async-teardown supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <ps2 supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sev supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sgx supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hyperv supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='features'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>relaxed</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vapic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>spinlocks</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vpindex</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>runtime</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>synic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stimer</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reset</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vendor_id</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>frequencies</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reenlightenment</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tlbflush</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ipi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>avic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emsr_bitmap</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>xmm_input</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hyperv>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <launchSecurity supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='sectype'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tdx</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </launchSecurity>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: </domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.032 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.038 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 00:45:12 np0005531754 nova_compute[255660]: <domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <domain>kvm</domain>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <arch>x86_64</arch>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <vcpu max='240'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <iothreads supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <os supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='firmware'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <loader supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>rom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pflash</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='readonly'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>yes</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='secure'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </loader>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </os>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='maximumMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <vendor>AMD</vendor>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='succor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='custom' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-128'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-256'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-512'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <memoryBacking supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='sourceType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>anonymous</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>memfd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </memoryBacking>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <disk supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='diskDevice'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>disk</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cdrom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>floppy</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>lun</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ide</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>fdc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>sata</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </disk>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <graphics supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vnc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egl-headless</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </graphics>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <video supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='modelType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vga</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cirrus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>none</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>bochs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ramfb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </video>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hostdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='mode'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>subsystem</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='startupPolicy'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>mandatory</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>requisite</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>optional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='subsysType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pci</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='capsType'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='pciBackend'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hostdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <rng supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>random</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </rng>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <filesystem supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='driverType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>path</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>handle</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtiofs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </filesystem>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <tpm supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-tis</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-crb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emulator</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>external</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendVersion'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>2.0</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </tpm>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <redirdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </redirdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <channel supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </channel>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <crypto supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </crypto>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <interface supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>passt</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </interface>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <panic supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>isa</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>hyperv</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </panic>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <console supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>null</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dev</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pipe</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stdio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>udp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tcp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu-vdagent</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </console>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <gic supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <genid supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backup supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <async-teardown supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <ps2 supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sev supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sgx supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hyperv supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='features'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>relaxed</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vapic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>spinlocks</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vpindex</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>runtime</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>synic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stimer</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reset</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vendor_id</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>frequencies</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reenlightenment</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tlbflush</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ipi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>avic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emsr_bitmap</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>xmm_input</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hyperv>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <launchSecurity supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='sectype'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tdx</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </launchSecurity>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: </domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.100 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 00:45:12 np0005531754 nova_compute[255660]: <domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <domain>kvm</domain>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <arch>x86_64</arch>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <vcpu max='4096'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <iothreads supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <os supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='firmware'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>efi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <loader supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>rom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pflash</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='readonly'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>yes</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='secure'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>yes</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>no</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </loader>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </os>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-passthrough' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='hostPassthroughMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='maximum' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='maximumMigratable'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>on</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>off</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='host-model' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <vendor>AMD</vendor>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='x2apic'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='hypervisor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='stibp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='overflow-recov'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='succor'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lbrv'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='tsc-scale'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='flushbyasid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pause-filter'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='pfthreshold'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <feature policy='disable' name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <mode name='custom' supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Broadwell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Cooperlake-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Denverton-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Dhyana-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='auto-ibrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Milan-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amd-psfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='no-nested-data-bp'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='null-sel-clr-base'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='stibp-always-on'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-Rome-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='EPYC-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='GraniteRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-128'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-256'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx10-512'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='prefetchiti'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Haswell-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v6'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Icelake-Server-v7'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='IvyBridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='KnightsMill-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4fmaps'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-4vnniw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512er'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512pf'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G4-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Opteron_G5-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fma4'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tbm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xop'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SapphireRapids-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='amx-tile'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-bf16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-fp16'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512-vpopcntdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bitalg'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vbmi2'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrc'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fzrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='la57'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='taa-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='tsx-ldtrk'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xfd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='SierraForest-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ifma'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-ne-convert'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx-vnni-int8'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='bus-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cmpccxadd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fbsdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='fsrs'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ibrs-all'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mcdt-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pbrsb-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='psdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='sbdr-ssdp-no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='serialize'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vaes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='vpclmulqdq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Client-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='hle'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='rtm'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Skylake-Server-v5'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512bw'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512cd'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512dq'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512f'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='avx512vl'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='invpcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pcid'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='pku'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='mpx'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v2'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v3'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='core-capability'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='split-lock-detect'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='Snowridge-v4'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='cldemote'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='erms'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='gfni'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdir64b'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='movdiri'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='xsaves'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='athlon-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='core2duo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='coreduo-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='n270-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='ss'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <blockers model='phenom-v1'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnow'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <feature name='3dnowext'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </blockers>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </mode>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </cpu>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <memoryBacking supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <enum name='sourceType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>anonymous</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <value>memfd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </memoryBacking>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <disk supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='diskDevice'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>disk</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cdrom</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>floppy</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>lun</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>fdc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>sata</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </disk>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <graphics supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vnc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egl-headless</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </graphics>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <video supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='modelType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vga</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>cirrus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>none</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>bochs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ramfb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </video>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hostdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='mode'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>subsystem</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='startupPolicy'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>mandatory</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>requisite</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>optional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='subsysType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pci</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>scsi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='capsType'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='pciBackend'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hostdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <rng supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtio-non-transitional</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>random</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>egd</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </rng>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <filesystem supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='driverType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>path</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>handle</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>virtiofs</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </filesystem>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <tpm supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-tis</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tpm-crb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emulator</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>external</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendVersion'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>2.0</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </tpm>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <redirdev supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='bus'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>usb</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </redirdev>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <channel supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </channel>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <crypto supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendModel'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>builtin</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </crypto>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <interface supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='backendType'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>default</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>passt</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </interface>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <panic supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='model'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>isa</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>hyperv</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </panic>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <console supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='type'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>null</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vc</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pty</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dev</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>file</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>pipe</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stdio</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>udp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tcp</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>unix</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>qemu-vdagent</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>dbus</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </console>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </devices>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  <features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <gic supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <vmcoreinfo supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <genid supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backingStoreInput supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <backup supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <async-teardown supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <ps2 supported='yes'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sev supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <sgx supported='no'/>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <hyperv supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='features'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>relaxed</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vapic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>spinlocks</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vpindex</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>runtime</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>synic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>stimer</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reset</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>vendor_id</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>frequencies</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>reenlightenment</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tlbflush</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>ipi</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>avic</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>emsr_bitmap</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>xmm_input</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <spinlocks>4095</spinlocks>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <stimer_direct>on</stimer_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </defaults>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </hyperv>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    <launchSecurity supported='yes'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      <enum name='sectype'>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:        <value>tdx</value>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:      </enum>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:    </launchSecurity>
Nov 22 00:45:12 np0005531754 nova_compute[255660]:  </features>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: </domainCapabilities>
Nov 22 00:45:12 np0005531754 nova_compute[255660]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
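The record above closes the libvirt domainCapabilities XML dump that `_get_domain_capabilities` logged. As a rough illustration of what a consumer can do with that document (this is not Nova's actual code; the helper name and the trimmed XML sample are ours, mirroring only the `<tpm>` block visible in the log), the supported enum values can be extracted with the standard library:

```python
import xml.etree.ElementTree as ET

# Trimmed-down sample mirroring the <tpm> block from the log above.
CAPS_XML = """
<domainCapabilities>
  <devices>
    <tpm supported='yes'>
      <enum name='model'>
        <value>tpm-tis</value>
        <value>tpm-crb</value>
      </enum>
      <enum name='backendModel'>
        <value>emulator</value>
        <value>external</value>
      </enum>
    </tpm>
  </devices>
</domainCapabilities>
"""

def enum_values(caps_xml: str, device: str, enum_name: str) -> list:
    """Return the <value> entries for one <enum> under devices/<device>."""
    root = ET.fromstring(caps_xml)
    enum = root.find(f"./devices/{device}/enum[@name='{enum_name}']")
    if enum is None:
        return []
    return [v.text for v in enum.findall("value")]

print(enum_values(CAPS_XML, "tpm", "model"))         # ['tpm-tis', 'tpm-crb']
print(enum_values(CAPS_XML, "tpm", "backendModel"))  # ['emulator', 'external']
```

Nova's own parsing lives in `nova.virt.libvirt.host`; the sketch only reflects the XML shape shown in this log.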
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.166 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.167 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.167 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.167 255664 INFO nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Secure Boot support detected#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.169 255664 INFO nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.170 255664 INFO nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.178 255664 DEBUG nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.205 255664 INFO nova.virt.node [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Determined node identity 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from /var/lib/nova/compute_id#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.228 255664 WARNING nova.compute.manager [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Compute nodes ['7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.261 255664 INFO nova.compute.manager [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.278 255664 WARNING nova.compute.manager [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.279 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.279 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.280 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.280 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.281 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:45:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:45:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508448704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.699 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
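The two records above bracket the `ceph df --format=json --id openstack` subprocess that the resource tracker runs to size its RBD-backed storage. A minimal sketch of consuming that kind of output (the sample JSON and helper are illustrative only and carry far fewer fields than real `ceph df` output):

```python
import json

# Illustrative (hypothetical) `ceph df --format=json` output; real output
# includes many more fields per pool and cluster-wide.
CEPH_DF = json.dumps({
    "stats": {"total_bytes": 64424509440,        # 60 GiB, as in the pgmap lines
              "total_used_bytes": 155189248,
              "total_avail_bytes": 64269320192},
    "pools": [
        {"name": "vms",
         "stats": {"bytes_used": 466944, "max_avail": 20401094656}},
    ],
})

def pool_usage(df_json: str, pool_name: str) -> dict:
    """Extract used/available bytes for one pool from `ceph df` JSON."""
    df = json.loads(df_json)
    for pool in df["pools"]:
        if pool["name"] == pool_name:
            return {"used": pool["stats"]["bytes_used"],
                    "free": pool["stats"]["max_avail"]}
    raise KeyError(pool_name)

usage = pool_usage(CEPH_DF, "vms")
print(usage["free"] // 2**30)  # pool free space in GiB -> 19
```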
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.882 255664 WARNING nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.883 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.883 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.883 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.898 255664 WARNING nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] No compute node record for compute-0.ctlplane.example.com:7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 could not be found.#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.914 255664 INFO nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.984 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:45:12 np0005531754 nova_compute[255660]: 2025-11-22 05:45:12.984 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:45:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:45:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:45:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:45:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:45:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:45:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:45:13 np0005531754 nova_compute[255660]: 2025-11-22 05:45:13.943 255664 INFO nova.scheduler.client.report [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] [req-304c1646-fc5e-4a45-b4f8-71dd3059107d] Created resource provider record via placement API for resource provider with UUID 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 and name compute-0.ctlplane.example.com.#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.294 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:45:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:45:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351671620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.733 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.740 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 22 00:45:14 np0005531754 nova_compute[255660]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.740 255664 INFO nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.741 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
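The inventory dict logged above feeds Placement's capacity formula, capacity = (total − reserved) × allocation_ratio. A small sketch reproducing that arithmetic for the resource classes shown (the helper name is ours, not Nova's or Placement's):

```python
def capacity(inv: dict) -> float:
    """Schedulable capacity per the Placement model:
    (total - reserved) * allocation_ratio."""
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

# The three resource classes from the ProviderTree inventory in the log.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    print(rc, round(capacity(inv), 2))
# MEMORY_MB 7168.0
# VCPU 32.0
# DISK_GB 53.1
```

So with an allocation_ratio of 4.0, this 8-vCPU host advertises 32 schedulable vCPUs, while the 0.9 disk ratio deliberately undercommits the 59 GiB of local disk.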
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.742 255664 DEBUG nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.812 255664 DEBUG nova.scheduler.client.report [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updated inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.813 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.813 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 00:45:14 np0005531754 nova_compute[255660]: 2025-11-22 05:45:14.971 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 22 00:45:15 np0005531754 nova_compute[255660]: 2025-11-22 05:45:15.017 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:45:15 np0005531754 nova_compute[255660]: 2025-11-22 05:45:15.018 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:45:15 np0005531754 nova_compute[255660]: 2025-11-22 05:45:15.019 255664 DEBUG nova.service [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 22 00:45:15 np0005531754 nova_compute[255660]: 2025-11-22 05:45:15.412 255664 DEBUG nova.service [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 22 00:45:15 np0005531754 nova_compute[255660]: 2025-11-22 05:45:15.413 255664 DEBUG nova.servicegroup.drivers.db [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 22 00:45:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:17 np0005531754 podman[256000]: 2025-11-22 05:45:17.23962222 +0000 UTC m=+0.093737291 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:45:17 np0005531754 podman[256020]: 2025-11-22 05:45:17.337339026 +0000 UTC m=+0.064552093 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:45:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:31 np0005531754 nova_compute[255660]: 2025-11-22 05:45:31.415 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:45:31 np0005531754 nova_compute[255660]: 2025-11-22 05:45:31.449 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:45:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533388122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533388122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722392471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:45:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722392471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:45:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:45:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825550237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:45:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:45:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825550237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:45:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:36 np0005531754 podman[256038]: 2025-11-22 05:45:36.282744122 +0000 UTC m=+0.147568236 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:45:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:45:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 00:45:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:45:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 00:45:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:45:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 00:45:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:45:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:45:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:39 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 265f5846-4d0f-45fb-8dad-b8aa3320248f does not exist
Nov 22 00:45:39 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 4d529e8a-b83c-43db-a206-d7ea20f58536 does not exist
Nov 22 00:45:39 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 18602d0c-fed7-4420-ac7b-53346e88b9d8 does not exist
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:45:40 np0005531754 podman[256455]: 2025-11-22 05:45:40.075465729 +0000 UTC m=+0.058375757 container create a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:45:40 np0005531754 systemd[1]: Started libpod-conmon-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope.
Nov 22 00:45:40 np0005531754 podman[256455]: 2025-11-22 05:45:40.048535231 +0000 UTC m=+0.031445289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:45:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:40 np0005531754 podman[256455]: 2025-11-22 05:45:40.196823976 +0000 UTC m=+0.179734024 container init a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 00:45:40 np0005531754 podman[256455]: 2025-11-22 05:45:40.204813249 +0000 UTC m=+0.187723287 container start a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:45:40 np0005531754 podman[256455]: 2025-11-22 05:45:40.209987147 +0000 UTC m=+0.192897225 container attach a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:45:40 np0005531754 musing_khorana[256471]: 167 167
Nov 22 00:45:40 np0005531754 systemd[1]: libpod-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope: Deactivated successfully.
Nov 22 00:45:40 np0005531754 conmon[256471]: conmon a1990ebd488b8ba07348 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope/container/memory.events
Nov 22 00:45:40 np0005531754 podman[256455]: 2025-11-22 05:45:40.212178486 +0000 UTC m=+0.195088524 container died a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:45:40 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f65825b4d482a7168459bfd24d952b714823efeaf398e00ca899008346aa8608-merged.mount: Deactivated successfully.
Nov 22 00:45:40 np0005531754 podman[256455]: 2025-11-22 05:45:40.285556042 +0000 UTC m=+0.268466090 container remove a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 00:45:40 np0005531754 systemd[1]: libpod-conmon-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope: Deactivated successfully.
Nov 22 00:45:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:40 np0005531754 podman[256495]: 2025-11-22 05:45:40.554586718 +0000 UTC m=+0.089325653 container create a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:45:40 np0005531754 podman[256495]: 2025-11-22 05:45:40.498978795 +0000 UTC m=+0.033717770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:45:40 np0005531754 systemd[1]: Started libpod-conmon-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope.
Nov 22 00:45:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:40 np0005531754 podman[256495]: 2025-11-22 05:45:40.721204502 +0000 UTC m=+0.255943447 container init a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:45:40 np0005531754 podman[256495]: 2025-11-22 05:45:40.730382387 +0000 UTC m=+0.265121312 container start a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:45:40 np0005531754 podman[256495]: 2025-11-22 05:45:40.747512343 +0000 UTC m=+0.282251288 container attach a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:45:41 np0005531754 gifted_aryabhata[256512]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:45:41 np0005531754 gifted_aryabhata[256512]: --> relative data size: 1.0
Nov 22 00:45:41 np0005531754 gifted_aryabhata[256512]: --> All data devices are unavailable
Nov 22 00:45:41 np0005531754 systemd[1]: libpod-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope: Deactivated successfully.
Nov 22 00:45:41 np0005531754 systemd[1]: libpod-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope: Consumed 1.055s CPU time.
Nov 22 00:45:41 np0005531754 podman[256541]: 2025-11-22 05:45:41.895530413 +0000 UTC m=+0.027796623 container died a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:45:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f-merged.mount: Deactivated successfully.
Nov 22 00:45:41 np0005531754 podman[256541]: 2025-11-22 05:45:41.983217972 +0000 UTC m=+0.115484142 container remove a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:45:41 np0005531754 systemd[1]: libpod-conmon-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope: Deactivated successfully.
Nov 22 00:45:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:45:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5626 writes, 23K keys, 5626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5626 writes, 880 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 00:45:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:42 np0005531754 podman[256696]: 2025-11-22 05:45:42.785585292 +0000 UTC m=+0.072764792 container create f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:45:42 np0005531754 systemd[1]: Started libpod-conmon-f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca.scope.
Nov 22 00:45:42 np0005531754 podman[256696]: 2025-11-22 05:45:42.757566154 +0000 UTC m=+0.044745704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:45:42 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:42 np0005531754 podman[256696]: 2025-11-22 05:45:42.868234056 +0000 UTC m=+0.155413546 container init f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 00:45:42 np0005531754 podman[256696]: 2025-11-22 05:45:42.878978073 +0000 UTC m=+0.166157573 container start f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:45:42 np0005531754 kind_edison[256712]: 167 167
Nov 22 00:45:42 np0005531754 podman[256696]: 2025-11-22 05:45:42.884058588 +0000 UTC m=+0.171238048 container attach f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:45:42 np0005531754 systemd[1]: libpod-f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca.scope: Deactivated successfully.
Nov 22 00:45:42 np0005531754 podman[256696]: 2025-11-22 05:45:42.885095506 +0000 UTC m=+0.172275036 container died f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:45:42 np0005531754 systemd[1]: var-lib-containers-storage-overlay-58f0c7c48a6378898dd9c3df82cfa8b52ae6c9c846ec0e844651cc4abb95b4c2-merged.mount: Deactivated successfully.
Nov 22 00:45:42 np0005531754 podman[256696]: 2025-11-22 05:45:42.932663674 +0000 UTC m=+0.219843144 container remove f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:45:42 np0005531754 systemd[1]: libpod-conmon-f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca.scope: Deactivated successfully.
Nov 22 00:45:43 np0005531754 podman[256735]: 2025-11-22 05:45:43.12334848 +0000 UTC m=+0.056737694 container create b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:45:43 np0005531754 systemd[1]: Started libpod-conmon-b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867.scope.
Nov 22 00:45:43 np0005531754 podman[256735]: 2025-11-22 05:45:43.099625657 +0000 UTC m=+0.033014851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:45:43 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:43 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:43 np0005531754 podman[256735]: 2025-11-22 05:45:43.240112874 +0000 UTC m=+0.173502068 container init b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:45:43 np0005531754 podman[256735]: 2025-11-22 05:45:43.247019608 +0000 UTC m=+0.180408782 container start b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 00:45:43 np0005531754 podman[256735]: 2025-11-22 05:45:43.251066396 +0000 UTC m=+0.184455600 container attach b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:45:43
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'backups', 'default.rgw.control', 'volumes']
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:45:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]: {
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:    "0": [
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:        {
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "devices": [
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "/dev/loop3"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            ],
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_name": "ceph_lv0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_size": "21470642176",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "name": "ceph_lv0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "tags": {
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cluster_name": "ceph",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.crush_device_class": "",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.encrypted": "0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osd_id": "0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.type": "block",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.vdo": "0"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            },
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "type": "block",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "vg_name": "ceph_vg0"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:        }
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:    ],
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:    "1": [
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:        {
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "devices": [
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "/dev/loop4"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            ],
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_name": "ceph_lv1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_size": "21470642176",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "name": "ceph_lv1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "tags": {
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cluster_name": "ceph",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.crush_device_class": "",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.encrypted": "0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osd_id": "1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.type": "block",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.vdo": "0"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            },
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "type": "block",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "vg_name": "ceph_vg1"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:        }
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:    ],
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:    "2": [
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:        {
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "devices": [
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "/dev/loop5"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            ],
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_name": "ceph_lv2",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_size": "21470642176",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "name": "ceph_lv2",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "tags": {
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.cluster_name": "ceph",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.crush_device_class": "",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.encrypted": "0",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osd_id": "2",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.type": "block",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:                "ceph.vdo": "0"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            },
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "type": "block",
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:            "vg_name": "ceph_vg2"
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:        }
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]:    ]
Nov 22 00:45:44 np0005531754 youthful_brattain[256751]: }
Nov 22 00:45:44 np0005531754 systemd[1]: libpod-b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867.scope: Deactivated successfully.
Nov 22 00:45:44 np0005531754 podman[256735]: 2025-11-22 05:45:44.054561257 +0000 UTC m=+0.987950431 container died b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:45:44 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4-merged.mount: Deactivated successfully.
Nov 22 00:45:44 np0005531754 podman[256735]: 2025-11-22 05:45:44.119418676 +0000 UTC m=+1.052807850 container remove b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:45:44 np0005531754 systemd[1]: libpod-conmon-b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867.scope: Deactivated successfully.
Nov 22 00:45:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:44 np0005531754 podman[256914]: 2025-11-22 05:45:44.788363028 +0000 UTC m=+0.045582276 container create b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 00:45:44 np0005531754 systemd[1]: Started libpod-conmon-b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243.scope.
Nov 22 00:45:44 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:44 np0005531754 podman[256914]: 2025-11-22 05:45:44.767907793 +0000 UTC m=+0.025127011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:45:44 np0005531754 podman[256914]: 2025-11-22 05:45:44.884911003 +0000 UTC m=+0.142130221 container init b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:45:44 np0005531754 podman[256914]: 2025-11-22 05:45:44.891866509 +0000 UTC m=+0.149085737 container start b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:45:44 np0005531754 nervous_torvalds[256930]: 167 167
Nov 22 00:45:44 np0005531754 systemd[1]: libpod-b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243.scope: Deactivated successfully.
Nov 22 00:45:44 np0005531754 podman[256914]: 2025-11-22 05:45:44.895855235 +0000 UTC m=+0.153074453 container attach b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:45:44 np0005531754 podman[256914]: 2025-11-22 05:45:44.899834981 +0000 UTC m=+0.157054189 container died b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:45:44 np0005531754 systemd[1]: var-lib-containers-storage-overlay-49c92bfb74998819e55c4833661c832a638674285dafd6f52934489e0e23087c-merged.mount: Deactivated successfully.
Nov 22 00:45:44 np0005531754 podman[256914]: 2025-11-22 05:45:44.936313884 +0000 UTC m=+0.193533082 container remove b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 00:45:44 np0005531754 systemd[1]: libpod-conmon-b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243.scope: Deactivated successfully.
Nov 22 00:45:45 np0005531754 podman[256952]: 2025-11-22 05:45:45.145259087 +0000 UTC m=+0.064684226 container create ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:45:45 np0005531754 systemd[1]: Started libpod-conmon-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope.
Nov 22 00:45:45 np0005531754 podman[256952]: 2025-11-22 05:45:45.117354013 +0000 UTC m=+0.036779152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:45:45 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:45:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:45:45 np0005531754 podman[256952]: 2025-11-22 05:45:45.248268845 +0000 UTC m=+0.167693964 container init ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:45:45 np0005531754 podman[256952]: 2025-11-22 05:45:45.259686289 +0000 UTC m=+0.179111418 container start ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:45:45 np0005531754 podman[256952]: 2025-11-22 05:45:45.266604174 +0000 UTC m=+0.186029283 container attach ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]: {
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "osd_id": 1,
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "type": "bluestore"
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:    },
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "osd_id": 2,
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "type": "bluestore"
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:    },
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "osd_id": 0,
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:        "type": "bluestore"
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]:    }
Nov 22 00:45:46 np0005531754 nervous_wescoff[256970]: }
Nov 22 00:45:46 np0005531754 systemd[1]: libpod-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope: Deactivated successfully.
Nov 22 00:45:46 np0005531754 systemd[1]: libpod-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope: Consumed 1.068s CPU time.
Nov 22 00:45:46 np0005531754 podman[256952]: 2025-11-22 05:45:46.321025266 +0000 UTC m=+1.240450395 container died ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:45:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346-merged.mount: Deactivated successfully.
Nov 22 00:45:46 np0005531754 podman[256952]: 2025-11-22 05:45:46.606623444 +0000 UTC m=+1.526048573 container remove ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 00:45:46 np0005531754 systemd[1]: libpod-conmon-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope: Deactivated successfully.
Nov 22 00:45:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:45:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:45:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:46 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5ebfd965-df5f-40f0-a2b5-d3d03a24ae1e does not exist
Nov 22 00:45:46 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 9e1c12af-94bc-4638-a65d-8ca60c609d18 does not exist
Nov 22 00:45:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:45:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 6951 writes, 28K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6951 writes, 1245 syncs, 5.58 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 00:45:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:45:48 np0005531754 podman[257066]: 2025-11-22 05:45:48.220230119 +0000 UTC m=+0.068695103 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 00:45:48 np0005531754 podman[257065]: 2025-11-22 05:45:48.242420541 +0000 UTC m=+0.091473670 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 00:45:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:45:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:45:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:45:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5749 writes, 24K keys, 5749 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5749 writes, 912 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdo
Nov 22 00:45:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:45:56 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 00:45:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:45:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:07 np0005531754 podman[257104]: 2025-11-22 05:46:07.25563454 +0000 UTC m=+0.112987432 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.643597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367643663, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1519, "num_deletes": 251, "total_data_size": 2495675, "memory_usage": 2525216, "flush_reason": "Manual Compaction"}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367661235, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2440715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14828, "largest_seqno": 16346, "table_properties": {"data_size": 2433615, "index_size": 4171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14340, "raw_average_key_size": 19, "raw_value_size": 2419448, "raw_average_value_size": 3318, "num_data_blocks": 191, "num_entries": 729, "num_filter_entries": 729, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790203, "oldest_key_time": 1763790203, "file_creation_time": 1763790367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17738 microseconds, and 10591 cpu microseconds.
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.661330) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2440715 bytes OK
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.661373) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.663712) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.663738) EVENT_LOG_v1 {"time_micros": 1763790367663729, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.663770) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2489039, prev total WAL file size 2489039, number of live WAL files 2.
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.665267) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2383KB)], [35(6852KB)]
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367665343, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9457323, "oldest_snapshot_seqno": -1}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4003 keys, 7683708 bytes, temperature: kUnknown
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367705161, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7683708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7654650, "index_size": 17940, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97835, "raw_average_key_size": 24, "raw_value_size": 7579911, "raw_average_value_size": 1893, "num_data_blocks": 759, "num_entries": 4003, "num_filter_entries": 4003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.705667) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7683708 bytes
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.706887) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.1 rd, 192.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4517, records dropped: 514 output_compression: NoCompression
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.706905) EVENT_LOG_v1 {"time_micros": 1763790367706896, "job": 16, "event": "compaction_finished", "compaction_time_micros": 39883, "compaction_time_cpu_micros": 18622, "output_level": 6, "num_output_files": 1, "total_output_size": 7683708, "num_input_records": 4517, "num_output_records": 4003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367707406, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367708917, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.665132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:46:07 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:46:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.132 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.133 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.134 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.149 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.150 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.151 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.151 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.152 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.152 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.153 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.153 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.154 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.179 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.180 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.181 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.181 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.182 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:46:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:46:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466684376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:46:11 np0005531754 nova_compute[255660]: 2025-11-22 05:46:11.644 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:46:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:13 np0005531754 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 22 00:46:13 np0005531754 irqbalance[791]: IRQ 26 affinity is now unmanaged
Nov 22 00:46:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 00:46:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2149655122' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:46:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:46:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.027 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.030 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.030 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.031 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.170 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.171 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.213 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:46:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:46:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847256402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.694 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.699 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.716 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.717 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:46:15 np0005531754 nova_compute[255660]: 2025-11-22 05:46:15.717 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:46:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:19 np0005531754 podman[257174]: 2025-11-22 05:46:19.206239349 +0000 UTC m=+0.067398498 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:46:19 np0005531754 podman[257175]: 2025-11-22 05:46:19.221367305 +0000 UTC m=+0.075171837 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:46:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 00:46:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 00:46:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 00:46:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 00:46:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 00:46:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:46:36.910 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:46:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:46:36.911 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:46:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:46:36.911 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:46:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:38 np0005531754 podman[257212]: 2025-11-22 05:46:38.29617817 +0000 UTC m=+0.144155257 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 00:46:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:46:43
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control']
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:46:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:46:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1533659100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1533659100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:46:47 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5b185e99-76ae-414e-9d5c-728ee8fbbdab does not exist
Nov 22 00:46:47 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 21b0d40e-9696-43c6-be87-77b99338a1ac does not exist
Nov 22 00:46:47 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev cbffa9dd-e575-4e6b-ae90-249d99a46d14 does not exist
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:46:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:46:48 np0005531754 podman[257509]: 2025-11-22 05:46:48.303495507 +0000 UTC m=+0.043583959 container create 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:46:48 np0005531754 systemd[1]: Started libpod-conmon-309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26.scope.
Nov 22 00:46:48 np0005531754 podman[257509]: 2025-11-22 05:46:48.280797029 +0000 UTC m=+0.020885461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:46:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:46:48 np0005531754 podman[257509]: 2025-11-22 05:46:48.404200688 +0000 UTC m=+0.144289190 container init 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 00:46:48 np0005531754 podman[257509]: 2025-11-22 05:46:48.416222461 +0000 UTC m=+0.156310913 container start 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 00:46:48 np0005531754 podman[257509]: 2025-11-22 05:46:48.420113855 +0000 UTC m=+0.160202377 container attach 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:46:48 np0005531754 recursing_kilby[257525]: 167 167
Nov 22 00:46:48 np0005531754 systemd[1]: libpod-309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26.scope: Deactivated successfully.
Nov 22 00:46:48 np0005531754 podman[257509]: 2025-11-22 05:46:48.425299664 +0000 UTC m=+0.165388076 container died 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:46:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-46b30b747761560fc85b49b5cc29fbf702e04d700d72ab2f037092e1fd3ad09f-merged.mount: Deactivated successfully.
Nov 22 00:46:48 np0005531754 podman[257509]: 2025-11-22 05:46:48.471162884 +0000 UTC m=+0.211251306 container remove 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:46:48 np0005531754 systemd[1]: libpod-conmon-309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26.scope: Deactivated successfully.
Nov 22 00:46:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:46:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:46:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:46:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:46:48 np0005531754 podman[257549]: 2025-11-22 05:46:48.690522577 +0000 UTC m=+0.056788195 container create eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:46:48 np0005531754 systemd[1]: Started libpod-conmon-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope.
Nov 22 00:46:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:46:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:48 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:48 np0005531754 podman[257549]: 2025-11-22 05:46:48.670318754 +0000 UTC m=+0.036584412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:46:48 np0005531754 podman[257549]: 2025-11-22 05:46:48.77232659 +0000 UTC m=+0.138592248 container init eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:46:48 np0005531754 podman[257549]: 2025-11-22 05:46:48.779804531 +0000 UTC m=+0.146070139 container start eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:46:48 np0005531754 podman[257549]: 2025-11-22 05:46:48.782978206 +0000 UTC m=+0.149243854 container attach eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:46:49 np0005531754 festive_noether[257566]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:46:49 np0005531754 festive_noether[257566]: --> relative data size: 1.0
Nov 22 00:46:49 np0005531754 festive_noether[257566]: --> All data devices are unavailable
Nov 22 00:46:49 np0005531754 systemd[1]: libpod-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope: Deactivated successfully.
Nov 22 00:46:49 np0005531754 systemd[1]: libpod-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope: Consumed 1.069s CPU time.
Nov 22 00:46:49 np0005531754 podman[257549]: 2025-11-22 05:46:49.886035896 +0000 UTC m=+1.252301574 container died eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:46:49 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c-merged.mount: Deactivated successfully.
Nov 22 00:46:49 np0005531754 podman[257549]: 2025-11-22 05:46:49.970524491 +0000 UTC m=+1.336790099 container remove eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:46:49 np0005531754 systemd[1]: libpod-conmon-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope: Deactivated successfully.
Nov 22 00:46:50 np0005531754 podman[257595]: 2025-11-22 05:46:50.004691308 +0000 UTC m=+0.083288735 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 00:46:50 np0005531754 podman[257603]: 2025-11-22 05:46:50.030420168 +0000 UTC m=+0.107059413 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 00:46:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:51 np0005531754 podman[257787]: 2025-11-22 05:46:51.042341634 +0000 UTC m=+0.066828194 container create 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:46:51 np0005531754 systemd[1]: Started libpod-conmon-34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920.scope.
Nov 22 00:46:51 np0005531754 podman[257787]: 2025-11-22 05:46:51.014389044 +0000 UTC m=+0.038875654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:46:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:46:51 np0005531754 podman[257787]: 2025-11-22 05:46:51.147778851 +0000 UTC m=+0.172265451 container init 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:46:51 np0005531754 podman[257787]: 2025-11-22 05:46:51.160359728 +0000 UTC m=+0.184846288 container start 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:46:51 np0005531754 podman[257787]: 2025-11-22 05:46:51.164829188 +0000 UTC m=+0.189315728 container attach 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:46:51 np0005531754 dreamy_perlman[257803]: 167 167
Nov 22 00:46:51 np0005531754 systemd[1]: libpod-34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920.scope: Deactivated successfully.
Nov 22 00:46:51 np0005531754 podman[257787]: 2025-11-22 05:46:51.168857116 +0000 UTC m=+0.193343686 container died 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:46:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-602f0a0012665a25b52c374a343500ce1061b96959c0d37028ff9ad6fc4361eb-merged.mount: Deactivated successfully.
Nov 22 00:46:51 np0005531754 podman[257787]: 2025-11-22 05:46:51.218184259 +0000 UTC m=+0.242670789 container remove 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:46:51 np0005531754 systemd[1]: libpod-conmon-34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920.scope: Deactivated successfully.
Nov 22 00:46:51 np0005531754 podman[257827]: 2025-11-22 05:46:51.45018312 +0000 UTC m=+0.058770017 container create eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:46:51 np0005531754 systemd[1]: Started libpod-conmon-eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773.scope.
Nov 22 00:46:51 np0005531754 podman[257827]: 2025-11-22 05:46:51.420680489 +0000 UTC m=+0.029267496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:46:51 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:46:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:51 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:51 np0005531754 podman[257827]: 2025-11-22 05:46:51.571646298 +0000 UTC m=+0.180233235 container init eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:46:51 np0005531754 podman[257827]: 2025-11-22 05:46:51.580728411 +0000 UTC m=+0.189315338 container start eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:46:51 np0005531754 podman[257827]: 2025-11-22 05:46:51.584442611 +0000 UTC m=+0.193029528 container attach eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 22 00:46:52 np0005531754 epic_bose[257843]: {
Nov 22 00:46:52 np0005531754 epic_bose[257843]:    "0": [
Nov 22 00:46:52 np0005531754 epic_bose[257843]:        {
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "devices": [
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "/dev/loop3"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            ],
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_name": "ceph_lv0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_size": "21470642176",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "name": "ceph_lv0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "tags": {
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cluster_name": "ceph",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.crush_device_class": "",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.encrypted": "0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osd_id": "0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.type": "block",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.vdo": "0"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            },
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "type": "block",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "vg_name": "ceph_vg0"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:        }
Nov 22 00:46:52 np0005531754 epic_bose[257843]:    ],
Nov 22 00:46:52 np0005531754 epic_bose[257843]:    "1": [
Nov 22 00:46:52 np0005531754 epic_bose[257843]:        {
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "devices": [
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "/dev/loop4"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            ],
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_name": "ceph_lv1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_size": "21470642176",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "name": "ceph_lv1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "tags": {
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cluster_name": "ceph",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.crush_device_class": "",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.encrypted": "0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osd_id": "1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.type": "block",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.vdo": "0"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            },
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "type": "block",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "vg_name": "ceph_vg1"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:        }
Nov 22 00:46:52 np0005531754 epic_bose[257843]:    ],
Nov 22 00:46:52 np0005531754 epic_bose[257843]:    "2": [
Nov 22 00:46:52 np0005531754 epic_bose[257843]:        {
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "devices": [
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "/dev/loop5"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            ],
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_name": "ceph_lv2",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_size": "21470642176",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "name": "ceph_lv2",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "tags": {
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.cluster_name": "ceph",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.crush_device_class": "",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.encrypted": "0",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osd_id": "2",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.type": "block",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:                "ceph.vdo": "0"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            },
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "type": "block",
Nov 22 00:46:52 np0005531754 epic_bose[257843]:            "vg_name": "ceph_vg2"
Nov 22 00:46:52 np0005531754 epic_bose[257843]:        }
Nov 22 00:46:52 np0005531754 epic_bose[257843]:    ]
Nov 22 00:46:52 np0005531754 epic_bose[257843]: }
Nov 22 00:46:52 np0005531754 systemd[1]: libpod-eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773.scope: Deactivated successfully.
Nov 22 00:46:52 np0005531754 podman[257827]: 2025-11-22 05:46:52.367017976 +0000 UTC m=+0.975604863 container died eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:46:52 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368-merged.mount: Deactivated successfully.
Nov 22 00:46:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:52 np0005531754 podman[257827]: 2025-11-22 05:46:52.428139625 +0000 UTC m=+1.036726522 container remove eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:46:52 np0005531754 systemd[1]: libpod-conmon-eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773.scope: Deactivated successfully.
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:46:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:46:53 np0005531754 podman[258004]: 2025-11-22 05:46:53.155590433 +0000 UTC m=+0.062285322 container create c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:46:53 np0005531754 systemd[1]: Started libpod-conmon-c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd.scope.
Nov 22 00:46:53 np0005531754 podman[258004]: 2025-11-22 05:46:53.129263567 +0000 UTC m=+0.035958496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:46:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:46:53 np0005531754 podman[258004]: 2025-11-22 05:46:53.244627141 +0000 UTC m=+0.151321990 container init c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:46:53 np0005531754 podman[258004]: 2025-11-22 05:46:53.252358108 +0000 UTC m=+0.159052967 container start c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:46:53 np0005531754 podman[258004]: 2025-11-22 05:46:53.256143079 +0000 UTC m=+0.162837928 container attach c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:46:53 np0005531754 exciting_ardinghelli[258021]: 167 167
Nov 22 00:46:53 np0005531754 systemd[1]: libpod-c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd.scope: Deactivated successfully.
Nov 22 00:46:53 np0005531754 podman[258004]: 2025-11-22 05:46:53.258107332 +0000 UTC m=+0.164802191 container died c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:46:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay-97cfcc02437fb508e0febf07d93fc162efeca3352844c5372d27fc8da873b7e5-merged.mount: Deactivated successfully.
Nov 22 00:46:53 np0005531754 podman[258004]: 2025-11-22 05:46:53.303648523 +0000 UTC m=+0.210343372 container remove c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 00:46:53 np0005531754 systemd[1]: libpod-conmon-c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd.scope: Deactivated successfully.
Nov 22 00:46:53 np0005531754 podman[258046]: 2025-11-22 05:46:53.458722331 +0000 UTC m=+0.041756720 container create 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:46:53 np0005531754 systemd[1]: Started libpod-conmon-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope.
Nov 22 00:46:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:46:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:53 np0005531754 podman[258046]: 2025-11-22 05:46:53.443144103 +0000 UTC m=+0.026178512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:46:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:46:53 np0005531754 podman[258046]: 2025-11-22 05:46:53.553588235 +0000 UTC m=+0.136622674 container init 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:46:53 np0005531754 podman[258046]: 2025-11-22 05:46:53.565035702 +0000 UTC m=+0.148070121 container start 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:46:53 np0005531754 podman[258046]: 2025-11-22 05:46:53.569418549 +0000 UTC m=+0.152452968 container attach 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:46:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:54 np0005531754 nifty_bell[258062]: {
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "osd_id": 1,
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "type": "bluestore"
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:    },
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "osd_id": 2,
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "type": "bluestore"
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:    },
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "osd_id": 0,
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:        "type": "bluestore"
Nov 22 00:46:54 np0005531754 nifty_bell[258062]:    }
Nov 22 00:46:54 np0005531754 nifty_bell[258062]: }
Nov 22 00:46:54 np0005531754 systemd[1]: libpod-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope: Deactivated successfully.
Nov 22 00:46:54 np0005531754 systemd[1]: libpod-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope: Consumed 1.084s CPU time.
Nov 22 00:46:54 np0005531754 podman[258046]: 2025-11-22 05:46:54.636731931 +0000 UTC m=+1.219766320 container died 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:46:54 np0005531754 systemd[1]: var-lib-containers-storage-overlay-745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc-merged.mount: Deactivated successfully.
Nov 22 00:46:54 np0005531754 podman[258046]: 2025-11-22 05:46:54.716735407 +0000 UTC m=+1.299769806 container remove 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:46:54 np0005531754 systemd[1]: libpod-conmon-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope: Deactivated successfully.
Nov 22 00:46:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:46:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:46:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:46:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:46:54 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5c3fc611-d1ef-45fa-ac2d-f75300d8187f does not exist
Nov 22 00:46:54 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev cd4c6266-9274-4a10-813d-4f78ffe25a6d does not exist
Nov 22 00:46:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:46:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:46:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:46:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:46:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:09 np0005531754 podman[258159]: 2025-11-22 05:47:09.278232321 +0000 UTC m=+0.127973542 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:47:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:11 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:47:11.846 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:47:11 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:47:11.847 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:47:11 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:47:11.849 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:47:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:47:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:47:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:47:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:47:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:47:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:47:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.712 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.713 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.735 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.736 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.737 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.755 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.756 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.756 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.757 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.757 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.757 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.758 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.758 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.761 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.786 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.787 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.787 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.788 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:47:15 np0005531754 nova_compute[255660]: 2025-11-22 05:47:15.788 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:47:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:47:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773929974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.247 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.394 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.395 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.395 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.395 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.480 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.480 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.496 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:47:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 00:47:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:47:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275177033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.955 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.963 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.979 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.982 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:47:16 np0005531754 nova_compute[255660]: 2025-11-22 05:47:16.982 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:47:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 22 00:47:20 np0005531754 podman[258230]: 2025-11-22 05:47:20.227389355 +0000 UTC m=+0.073239145 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 00:47:20 np0005531754 podman[258231]: 2025-11-22 05:47:20.249542299 +0000 UTC m=+0.093230552 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 00:47:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 00:47:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 00:47:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 00:47:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 00:47:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Nov 22 00:47:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 22 00:47:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:47:36.912 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:47:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:47:36.912 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:47:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:47:36.912 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:47:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:40 np0005531754 podman[258271]: 2025-11-22 05:47:40.253988792 +0000 UTC m=+0.115963661 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:47:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:47:43
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:47:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:47:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:47:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276692147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:47:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:47:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276692147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:47:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:51 np0005531754 podman[258301]: 2025-11-22 05:47:51.203753633 +0000 UTC m=+0.058503560 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 00:47:51 np0005531754 podman[258300]: 2025-11-22 05:47:51.211382308 +0000 UTC m=+0.063694449 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 00:47:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:47:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:47:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:56 np0005531754 podman[258512]: 2025-11-22 05:47:55.998901391 +0000 UTC m=+0.151370291 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:47:56 np0005531754 podman[258512]: 2025-11-22 05:47:56.137536088 +0000 UTC m=+0.290005048 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:47:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:47:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:47:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:47:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:47:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:47:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:47:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:47:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 0c638df0-d6f5-4377-b1f9-1be760fd5958 does not exist
Nov 22 00:47:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d4c52154-f7a3-48ff-bf53-40205bd606a9 does not exist
Nov 22 00:47:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 0e0df70b-a158-439b-ace5-e8ce98437250 does not exist
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:47:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:47:58 np0005531754 podman[258945]: 2025-11-22 05:47:58.631213699 +0000 UTC m=+0.072824024 container create 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 00:47:58 np0005531754 systemd[1]: Started libpod-conmon-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope.
Nov 22 00:47:58 np0005531754 podman[258945]: 2025-11-22 05:47:58.586997934 +0000 UTC m=+0.028608329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:47:58 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:47:58 np0005531754 podman[258945]: 2025-11-22 05:47:58.725208541 +0000 UTC m=+0.166818926 container init 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:47:58 np0005531754 podman[258945]: 2025-11-22 05:47:58.738436745 +0000 UTC m=+0.180047060 container start 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:47:58 np0005531754 adoring_gould[258961]: 167 167
Nov 22 00:47:58 np0005531754 systemd[1]: libpod-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope: Deactivated successfully.
Nov 22 00:47:58 np0005531754 conmon[258961]: conmon 35fd619446046537ec7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope/container/memory.events
Nov 22 00:47:58 np0005531754 podman[258945]: 2025-11-22 05:47:58.754434633 +0000 UTC m=+0.196045028 container attach 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:47:58 np0005531754 podman[258945]: 2025-11-22 05:47:58.75615184 +0000 UTC m=+0.197762195 container died 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:47:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:47:58 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8fc4f8d18c6896a1f92095e41d2b2729b97fc1a389fcb79af4853dca8eee8a0d-merged.mount: Deactivated successfully.
Nov 22 00:47:58 np0005531754 podman[258945]: 2025-11-22 05:47:58.838391335 +0000 UTC m=+0.280001660 container remove 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:47:58 np0005531754 systemd[1]: libpod-conmon-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope: Deactivated successfully.
Nov 22 00:47:59 np0005531754 podman[258986]: 2025-11-22 05:47:59.086814347 +0000 UTC m=+0.066127314 container create f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:47:59 np0005531754 systemd[1]: Started libpod-conmon-f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020.scope.
Nov 22 00:47:59 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:47:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:47:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:47:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:47:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:47:59 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:47:59 np0005531754 podman[258986]: 2025-11-22 05:47:59.057899721 +0000 UTC m=+0.037212728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:47:59 np0005531754 podman[258986]: 2025-11-22 05:47:59.166388611 +0000 UTC m=+0.145701598 container init f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:47:59 np0005531754 podman[258986]: 2025-11-22 05:47:59.172873795 +0000 UTC m=+0.152186762 container start f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:47:59 np0005531754 podman[258986]: 2025-11-22 05:47:59.186566832 +0000 UTC m=+0.165879819 container attach f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 00:48:00 np0005531754 stupefied_mclaren[259002]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:48:00 np0005531754 stupefied_mclaren[259002]: --> relative data size: 1.0
Nov 22 00:48:00 np0005531754 stupefied_mclaren[259002]: --> All data devices are unavailable
Nov 22 00:48:00 np0005531754 systemd[1]: libpod-f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020.scope: Deactivated successfully.
Nov 22 00:48:00 np0005531754 podman[258986]: 2025-11-22 05:48:00.222137292 +0000 UTC m=+1.201450269 container died f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:48:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d-merged.mount: Deactivated successfully.
Nov 22 00:48:00 np0005531754 podman[258986]: 2025-11-22 05:48:00.293028033 +0000 UTC m=+1.272341000 container remove f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:48:00 np0005531754 systemd[1]: libpod-conmon-f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020.scope: Deactivated successfully.
Nov 22 00:48:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:01 np0005531754 podman[259184]: 2025-11-22 05:48:01.014157721 +0000 UTC m=+0.072350671 container create af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:48:01 np0005531754 podman[259184]: 2025-11-22 05:48:00.979967584 +0000 UTC m=+0.038160544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:48:01 np0005531754 systemd[1]: Started libpod-conmon-af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5.scope.
Nov 22 00:48:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:48:01 np0005531754 podman[259184]: 2025-11-22 05:48:01.147864007 +0000 UTC m=+0.206056947 container init af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:48:01 np0005531754 podman[259184]: 2025-11-22 05:48:01.155634455 +0000 UTC m=+0.213827365 container start af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:48:01 np0005531754 blissful_hofstadter[259200]: 167 167
Nov 22 00:48:01 np0005531754 systemd[1]: libpod-af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5.scope: Deactivated successfully.
Nov 22 00:48:01 np0005531754 podman[259184]: 2025-11-22 05:48:01.300716645 +0000 UTC m=+0.358909655 container attach af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:48:01 np0005531754 podman[259184]: 2025-11-22 05:48:01.30200178 +0000 UTC m=+0.360194700 container died af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:48:01 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ce3a9a5995715dd1203225ec8a397d2284c5ec84e4be2597d0c43d80e5769898-merged.mount: Deactivated successfully.
Nov 22 00:48:01 np0005531754 podman[259184]: 2025-11-22 05:48:01.502251199 +0000 UTC m=+0.560444099 container remove af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:48:01 np0005531754 systemd[1]: libpod-conmon-af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5.scope: Deactivated successfully.
Nov 22 00:48:01 np0005531754 podman[259224]: 2025-11-22 05:48:01.686407708 +0000 UTC m=+0.048018509 container create 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:48:01 np0005531754 systemd[1]: Started libpod-conmon-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope.
Nov 22 00:48:01 np0005531754 podman[259224]: 2025-11-22 05:48:01.666413432 +0000 UTC m=+0.028024273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:48:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:48:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:01 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:01 np0005531754 podman[259224]: 2025-11-22 05:48:01.782755692 +0000 UTC m=+0.144366583 container init 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:48:01 np0005531754 podman[259224]: 2025-11-22 05:48:01.793458429 +0000 UTC m=+0.155069230 container start 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:48:01 np0005531754 podman[259224]: 2025-11-22 05:48:01.797111117 +0000 UTC m=+0.158721938 container attach 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:48:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:02 np0005531754 festive_tesla[259240]: {
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:    "0": [
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:        {
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "devices": [
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "/dev/loop3"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            ],
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_name": "ceph_lv0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_size": "21470642176",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "name": "ceph_lv0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "tags": {
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cluster_name": "ceph",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.crush_device_class": "",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.encrypted": "0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osd_id": "0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.type": "block",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.vdo": "0"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            },
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "type": "block",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "vg_name": "ceph_vg0"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:        }
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:    ],
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:    "1": [
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:        {
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "devices": [
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "/dev/loop4"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            ],
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_name": "ceph_lv1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_size": "21470642176",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "name": "ceph_lv1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "tags": {
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cluster_name": "ceph",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.crush_device_class": "",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.encrypted": "0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osd_id": "1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.type": "block",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.vdo": "0"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            },
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "type": "block",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "vg_name": "ceph_vg1"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:        }
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:    ],
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:    "2": [
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:        {
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "devices": [
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "/dev/loop5"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            ],
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_name": "ceph_lv2",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_size": "21470642176",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "name": "ceph_lv2",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "tags": {
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.cluster_name": "ceph",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.crush_device_class": "",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.encrypted": "0",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osd_id": "2",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.type": "block",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:                "ceph.vdo": "0"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            },
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "type": "block",
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:            "vg_name": "ceph_vg2"
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:        }
Nov 22 00:48:02 np0005531754 festive_tesla[259240]:    ]
Nov 22 00:48:02 np0005531754 festive_tesla[259240]: }
Nov 22 00:48:02 np0005531754 systemd[1]: libpod-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope: Deactivated successfully.
Nov 22 00:48:02 np0005531754 conmon[259240]: conmon 9bc5cd46174ef2210c99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope/container/memory.events
Nov 22 00:48:02 np0005531754 podman[259224]: 2025-11-22 05:48:02.603548362 +0000 UTC m=+0.965159183 container died 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:48:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f-merged.mount: Deactivated successfully.
Nov 22 00:48:02 np0005531754 podman[259224]: 2025-11-22 05:48:02.765442194 +0000 UTC m=+1.127053005 container remove 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:48:02 np0005531754 systemd[1]: libpod-conmon-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope: Deactivated successfully.
Nov 22 00:48:03 np0005531754 podman[259404]: 2025-11-22 05:48:03.526442081 +0000 UTC m=+0.041514664 container create 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:48:03 np0005531754 systemd[1]: Started libpod-conmon-30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01.scope.
Nov 22 00:48:03 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:48:03 np0005531754 podman[259404]: 2025-11-22 05:48:03.506829945 +0000 UTC m=+0.021902538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:48:03 np0005531754 podman[259404]: 2025-11-22 05:48:03.604610847 +0000 UTC m=+0.119683450 container init 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:48:03 np0005531754 podman[259404]: 2025-11-22 05:48:03.612805497 +0000 UTC m=+0.127878070 container start 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 00:48:03 np0005531754 focused_panini[259420]: 167 167
Nov 22 00:48:03 np0005531754 systemd[1]: libpod-30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01.scope: Deactivated successfully.
Nov 22 00:48:03 np0005531754 podman[259404]: 2025-11-22 05:48:03.630942763 +0000 UTC m=+0.146015376 container attach 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 00:48:03 np0005531754 podman[259404]: 2025-11-22 05:48:03.631298663 +0000 UTC m=+0.146371236 container died 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 00:48:03 np0005531754 systemd[1]: var-lib-containers-storage-overlay-67d2372c86aba2530702ead2654d0bbb55349267c4651fcb294fa20bc5d5aede-merged.mount: Deactivated successfully.
Nov 22 00:48:03 np0005531754 podman[259404]: 2025-11-22 05:48:03.717451283 +0000 UTC m=+0.232523856 container remove 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:48:03 np0005531754 systemd[1]: libpod-conmon-30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01.scope: Deactivated successfully.
Nov 22 00:48:03 np0005531754 podman[259444]: 2025-11-22 05:48:03.961070936 +0000 UTC m=+0.098659556 container create 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:48:03 np0005531754 podman[259444]: 2025-11-22 05:48:03.891130701 +0000 UTC m=+0.028719371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:48:04 np0005531754 systemd[1]: Started libpod-conmon-9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e.scope.
Nov 22 00:48:04 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:48:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:04 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:48:04 np0005531754 podman[259444]: 2025-11-22 05:48:04.082592945 +0000 UTC m=+0.220181525 container init 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:48:04 np0005531754 podman[259444]: 2025-11-22 05:48:04.088628267 +0000 UTC m=+0.226216847 container start 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:48:04 np0005531754 podman[259444]: 2025-11-22 05:48:04.110103262 +0000 UTC m=+0.247691862 container attach 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 00:48:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]: {
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "osd_id": 1,
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "type": "bluestore"
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:    },
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "osd_id": 2,
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "type": "bluestore"
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:    },
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "osd_id": 0,
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:        "type": "bluestore"
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]:    }
Nov 22 00:48:05 np0005531754 suspicious_maxwell[259461]: }
Nov 22 00:48:05 np0005531754 systemd[1]: libpod-9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e.scope: Deactivated successfully.
Nov 22 00:48:05 np0005531754 podman[259444]: 2025-11-22 05:48:05.079010735 +0000 UTC m=+1.216599325 container died 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 00:48:05 np0005531754 systemd[1]: var-lib-containers-storage-overlay-04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc-merged.mount: Deactivated successfully.
Nov 22 00:48:05 np0005531754 podman[259444]: 2025-11-22 05:48:05.179337675 +0000 UTC m=+1.316926265 container remove 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 00:48:05 np0005531754 systemd[1]: libpod-conmon-9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e.scope: Deactivated successfully.
Nov 22 00:48:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:48:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:48:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:48:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:48:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 191fa7df-c4dd-4616-9a28-f8fc520ae19d does not exist
Nov 22 00:48:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev feced4bd-3b03-44a5-8d37-d8c20dbd9ad7 does not exist
Nov 22 00:48:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:48:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:48:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:11 np0005531754 podman[259555]: 2025-11-22 05:48:11.3541215 +0000 UTC m=+0.196167969 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 00:48:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:48:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:48:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:48:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:48:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:48:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:48:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:16 np0005531754 nova_compute[255660]: 2025-11-22 05:48:16.984 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:16 np0005531754 nova_compute[255660]: 2025-11-22 05:48:16.985 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:16 np0005531754 nova_compute[255660]: 2025-11-22 05:48:16.985 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:48:16 np0005531754 nova_compute[255660]: 2025-11-22 05:48:16.986 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.032 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.033 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.033 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.033 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.034 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:48:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:48:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411300330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.502 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.652 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.653 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.654 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.654 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.715 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.715 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:48:17 np0005531754 nova_compute[255660]: 2025-11-22 05:48:17.729 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:48:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:48:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199327227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:48:18 np0005531754 nova_compute[255660]: 2025-11-22 05:48:18.165 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:48:18 np0005531754 nova_compute[255660]: 2025-11-22 05:48:18.174 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:48:18 np0005531754 nova_compute[255660]: 2025-11-22 05:48:18.193 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:48:18 np0005531754 nova_compute[255660]: 2025-11-22 05:48:18.196 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:48:18 np0005531754 nova_compute[255660]: 2025-11-22 05:48:18.197 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:48:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:22 np0005531754 podman[259626]: 2025-11-22 05:48:22.230738739 +0000 UTC m=+0.083724110 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 00:48:22 np0005531754 podman[259627]: 2025-11-22 05:48:22.243343646 +0000 UTC m=+0.096779440 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Nov 22 00:48:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:48:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:48:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:48:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:48:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:48:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:48:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:42 np0005531754 podman[259664]: 2025-11-22 05:48:42.286832096 +0000 UTC m=+0.134509289 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:48:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:48:43
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'volumes', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:48:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:48:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:48:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859533984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:48:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:48:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859533984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:48:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:48:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:48:53 np0005531754 podman[259691]: 2025-11-22 05:48:53.235707171 +0000 UTC m=+0.084303337 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 00:48:53 np0005531754 podman[259690]: 2025-11-22 05:48:53.252613843 +0000 UTC m=+0.104642831 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:48:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:48:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:48:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:49:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 8cea6c5e-b231-498e-9f64-8288511f3b26 does not exist
Nov 22 00:49:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d7e617b5-befa-4e6b-9d7f-7dbbf7a402a7 does not exist
Nov 22 00:49:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev eaf20724-5698-472e-966d-0be7890a2d58 does not exist
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:49:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:49:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:49:07 np0005531754 podman[260003]: 2025-11-22 05:49:07.136862079 +0000 UTC m=+0.059770970 container create 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:49:07 np0005531754 systemd[1]: Started libpod-conmon-2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a.scope.
Nov 22 00:49:07 np0005531754 podman[260003]: 2025-11-22 05:49:07.115189669 +0000 UTC m=+0.038098600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:49:07 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:49:07 np0005531754 podman[260003]: 2025-11-22 05:49:07.23892912 +0000 UTC m=+0.161838041 container init 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:49:07 np0005531754 podman[260003]: 2025-11-22 05:49:07.247377966 +0000 UTC m=+0.170287007 container start 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:49:07 np0005531754 podman[260003]: 2025-11-22 05:49:07.251057785 +0000 UTC m=+0.173966716 container attach 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:49:07 np0005531754 pensive_lederberg[260020]: 167 167
Nov 22 00:49:07 np0005531754 systemd[1]: libpod-2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a.scope: Deactivated successfully.
Nov 22 00:49:07 np0005531754 podman[260003]: 2025-11-22 05:49:07.255511614 +0000 UTC m=+0.178420495 container died 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:49:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay-23103f12f0b6d7e63ad01a432467eb457e4fddfb3147a22a3192167364101779-merged.mount: Deactivated successfully.
Nov 22 00:49:07 np0005531754 podman[260003]: 2025-11-22 05:49:07.310444433 +0000 UTC m=+0.233353324 container remove 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:49:07 np0005531754 systemd[1]: libpod-conmon-2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a.scope: Deactivated successfully.
Nov 22 00:49:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:07 np0005531754 podman[260044]: 2025-11-22 05:49:07.548637146 +0000 UTC m=+0.071370161 container create de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:49:07 np0005531754 systemd[1]: Started libpod-conmon-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope.
Nov 22 00:49:07 np0005531754 podman[260044]: 2025-11-22 05:49:07.522398133 +0000 UTC m=+0.045131188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:49:07 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:49:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:07 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:07 np0005531754 podman[260044]: 2025-11-22 05:49:07.655022622 +0000 UTC m=+0.177755677 container init de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 00:49:07 np0005531754 podman[260044]: 2025-11-22 05:49:07.669180761 +0000 UTC m=+0.191913766 container start de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:49:07 np0005531754 podman[260044]: 2025-11-22 05:49:07.674508063 +0000 UTC m=+0.197241048 container attach de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:49:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:08 np0005531754 xenodochial_hellman[260061]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:49:08 np0005531754 xenodochial_hellman[260061]: --> relative data size: 1.0
Nov 22 00:49:08 np0005531754 xenodochial_hellman[260061]: --> All data devices are unavailable
Nov 22 00:49:08 np0005531754 systemd[1]: libpod-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope: Deactivated successfully.
Nov 22 00:49:08 np0005531754 systemd[1]: libpod-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope: Consumed 1.013s CPU time.
Nov 22 00:49:08 np0005531754 podman[260044]: 2025-11-22 05:49:08.738183451 +0000 UTC m=+1.260916436 container died de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 00:49:08 np0005531754 systemd[1]: var-lib-containers-storage-overlay-4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b-merged.mount: Deactivated successfully.
Nov 22 00:49:08 np0005531754 podman[260044]: 2025-11-22 05:49:08.797115947 +0000 UTC m=+1.319848932 container remove de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:49:08 np0005531754 systemd[1]: libpod-conmon-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope: Deactivated successfully.
Nov 22 00:49:09 np0005531754 podman[260243]: 2025-11-22 05:49:09.580651069 +0000 UTC m=+0.047898182 container create 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:49:09 np0005531754 systemd[1]: Started libpod-conmon-30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be.scope.
Nov 22 00:49:09 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:49:09 np0005531754 podman[260243]: 2025-11-22 05:49:09.560786718 +0000 UTC m=+0.028033811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:49:09 np0005531754 podman[260243]: 2025-11-22 05:49:09.673224766 +0000 UTC m=+0.140471859 container init 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:49:09 np0005531754 podman[260243]: 2025-11-22 05:49:09.688836494 +0000 UTC m=+0.156083607 container start 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:49:09 np0005531754 podman[260243]: 2025-11-22 05:49:09.693212621 +0000 UTC m=+0.160459744 container attach 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 00:49:09 np0005531754 nifty_ishizaka[260259]: 167 167
Nov 22 00:49:09 np0005531754 systemd[1]: libpod-30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be.scope: Deactivated successfully.
Nov 22 00:49:09 np0005531754 podman[260243]: 2025-11-22 05:49:09.696371035 +0000 UTC m=+0.163618138 container died 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 00:49:09 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0052346d5411a7ac5f40ce4c1fa7fede3c315c86a3eb5c459274ae82d5fd37df-merged.mount: Deactivated successfully.
Nov 22 00:49:09 np0005531754 podman[260243]: 2025-11-22 05:49:09.737411273 +0000 UTC m=+0.204658346 container remove 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:49:09 np0005531754 systemd[1]: libpod-conmon-30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be.scope: Deactivated successfully.
Nov 22 00:49:09 np0005531754 podman[260281]: 2025-11-22 05:49:09.992780006 +0000 UTC m=+0.066887451 container create 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:49:10 np0005531754 systemd[1]: Started libpod-conmon-693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0.scope.
Nov 22 00:49:10 np0005531754 podman[260281]: 2025-11-22 05:49:09.968777344 +0000 UTC m=+0.042884829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:49:10 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:49:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:10 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:10 np0005531754 podman[260281]: 2025-11-22 05:49:10.119174707 +0000 UTC m=+0.193282222 container init 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 00:49:10 np0005531754 podman[260281]: 2025-11-22 05:49:10.127477149 +0000 UTC m=+0.201584634 container start 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:49:10 np0005531754 podman[260281]: 2025-11-22 05:49:10.132098583 +0000 UTC m=+0.206206048 container attach 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:49:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]: {
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:    "0": [
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:        {
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "devices": [
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "/dev/loop3"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            ],
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_name": "ceph_lv0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_size": "21470642176",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "name": "ceph_lv0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "tags": {
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cluster_name": "ceph",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.crush_device_class": "",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.encrypted": "0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osd_id": "0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.type": "block",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.vdo": "0"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            },
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "type": "block",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "vg_name": "ceph_vg0"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:        }
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:    ],
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:    "1": [
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:        {
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "devices": [
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "/dev/loop4"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            ],
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_name": "ceph_lv1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_size": "21470642176",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "name": "ceph_lv1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "tags": {
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cluster_name": "ceph",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.crush_device_class": "",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.encrypted": "0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osd_id": "1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.type": "block",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.vdo": "0"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            },
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "type": "block",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "vg_name": "ceph_vg1"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:        }
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:    ],
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:    "2": [
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:        {
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "devices": [
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "/dev/loop5"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            ],
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_name": "ceph_lv2",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_size": "21470642176",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "name": "ceph_lv2",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "tags": {
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.cluster_name": "ceph",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.crush_device_class": "",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.encrypted": "0",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osd_id": "2",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.type": "block",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:                "ceph.vdo": "0"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            },
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "type": "block",
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:            "vg_name": "ceph_vg2"
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:        }
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]:    ]
Nov 22 00:49:10 np0005531754 exciting_darwin[260298]: }
Nov 22 00:49:10 np0005531754 systemd[1]: libpod-693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0.scope: Deactivated successfully.
Nov 22 00:49:10 np0005531754 podman[260281]: 2025-11-22 05:49:10.878941564 +0000 UTC m=+0.953049009 container died 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:49:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef-merged.mount: Deactivated successfully.
Nov 22 00:49:10 np0005531754 podman[260281]: 2025-11-22 05:49:10.941938279 +0000 UTC m=+1.016045764 container remove 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:49:10 np0005531754 systemd[1]: libpod-conmon-693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0.scope: Deactivated successfully.
Nov 22 00:49:11 np0005531754 podman[260462]: 2025-11-22 05:49:11.709848903 +0000 UTC m=+0.059360768 container create 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 22 00:49:11 np0005531754 systemd[1]: Started libpod-conmon-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope.
Nov 22 00:49:11 np0005531754 podman[260462]: 2025-11-22 05:49:11.682279366 +0000 UTC m=+0.031791301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:49:11 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:49:11 np0005531754 podman[260462]: 2025-11-22 05:49:11.804085344 +0000 UTC m=+0.153597289 container init 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:49:11 np0005531754 podman[260462]: 2025-11-22 05:49:11.811342269 +0000 UTC m=+0.160854114 container start 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:49:11 np0005531754 podman[260462]: 2025-11-22 05:49:11.815355366 +0000 UTC m=+0.164867241 container attach 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:49:11 np0005531754 systemd[1]: libpod-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope: Deactivated successfully.
Nov 22 00:49:11 np0005531754 gallant_kilby[260478]: 167 167
Nov 22 00:49:11 np0005531754 conmon[260478]: conmon 82cec222b6ed94d80e74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope/container/memory.events
Nov 22 00:49:11 np0005531754 podman[260462]: 2025-11-22 05:49:11.821204083 +0000 UTC m=+0.170715958 container died 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:49:11 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e758e9e3675e82d313ebdba658959e3f47334fdf0ab23e64a6c543bf882f292a-merged.mount: Deactivated successfully.
Nov 22 00:49:11 np0005531754 podman[260462]: 2025-11-22 05:49:11.864049969 +0000 UTC m=+0.213561804 container remove 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 00:49:11 np0005531754 systemd[1]: libpod-conmon-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope: Deactivated successfully.
Nov 22 00:49:12 np0005531754 podman[260502]: 2025-11-22 05:49:12.062299083 +0000 UTC m=+0.049879126 container create f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:49:12 np0005531754 systemd[1]: Started libpod-conmon-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope.
Nov 22 00:49:12 np0005531754 podman[260502]: 2025-11-22 05:49:12.0401283 +0000 UTC m=+0.027708383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:49:12 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:49:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:49:12 np0005531754 podman[260502]: 2025-11-22 05:49:12.169241934 +0000 UTC m=+0.156822037 container init f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:49:12 np0005531754 podman[260502]: 2025-11-22 05:49:12.182749015 +0000 UTC m=+0.170329098 container start f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:49:12 np0005531754 podman[260502]: 2025-11-22 05:49:12.186795984 +0000 UTC m=+0.174376067 container attach f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:49:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]: {
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "osd_id": 1,
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "type": "bluestore"
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:    },
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "osd_id": 2,
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "type": "bluestore"
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:    },
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "osd_id": 0,
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:        "type": "bluestore"
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]:    }
Nov 22 00:49:13 np0005531754 sweet_hoover[260518]: }
Nov 22 00:49:13 np0005531754 systemd[1]: libpod-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope: Deactivated successfully.
Nov 22 00:49:13 np0005531754 systemd[1]: libpod-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope: Consumed 1.038s CPU time.
Nov 22 00:49:13 np0005531754 podman[260502]: 2025-11-22 05:49:13.211090987 +0000 UTC m=+1.198671040 container died f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:49:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97-merged.mount: Deactivated successfully.
Nov 22 00:49:13 np0005531754 podman[260546]: 2025-11-22 05:49:13.257622892 +0000 UTC m=+0.109264215 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:49:13 np0005531754 podman[260502]: 2025-11-22 05:49:13.275699685 +0000 UTC m=+1.263279728 container remove f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:49:13 np0005531754 systemd[1]: libpod-conmon-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope: Deactivated successfully.
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.337 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e56ba9be-24c6-4266-9cbd-5c6ac622dde3 does not exist
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 78235278-85ea-4bd0-9cb8-1de7f96089aa does not exist
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.355 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.356 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.376 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.377 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.378 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.378 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.379 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:49:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877995662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:49:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:49:13 np0005531754 nova_compute[255660]: 2025-11-22 05:49:13.833 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.009 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.011 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.011 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.011 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.183 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.184 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.206 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:49:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:49:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29265632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.644 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.651 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.709 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.712 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:49:14 np0005531754 nova_compute[255660]: 2025-11-22 05:49:14.713 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:49:15 np0005531754 nova_compute[255660]: 2025-11-22 05:49:15.486 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:15 np0005531754 nova_compute[255660]: 2025-11-22 05:49:15.487 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:15 np0005531754 nova_compute[255660]: 2025-11-22 05:49:15.487 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:15 np0005531754 nova_compute[255660]: 2025-11-22 05:49:15.488 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:15 np0005531754 nova_compute[255660]: 2025-11-22 05:49:15.488 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:15 np0005531754 nova_compute[255660]: 2025-11-22 05:49:15.489 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:49:16 np0005531754 nova_compute[255660]: 2025-11-22 05:49:16.127 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:16 np0005531754 nova_compute[255660]: 2025-11-22 05:49:16.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:49:16 np0005531754 nova_compute[255660]: 2025-11-22 05:49:16.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:49:16 np0005531754 nova_compute[255660]: 2025-11-22 05:49:16.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:49:16 np0005531754 nova_compute[255660]: 2025-11-22 05:49:16.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:49:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:24 np0005531754 podman[260684]: 2025-11-22 05:49:24.211803626 +0000 UTC m=+0.066284164 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 00:49:24 np0005531754 podman[260685]: 2025-11-22 05:49:24.251344563 +0000 UTC m=+0.097505319 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 00:49:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:49:36.929 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:49:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:49:36.929 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:49:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:49:36.929 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:49:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:49:43
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:49:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:49:44 np0005531754 podman[260723]: 2025-11-22 05:49:44.265613505 +0000 UTC m=+0.122881019 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 00:49:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:49:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1641952068' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:49:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:49:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1641952068' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:49:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:49:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:49:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:55 np0005531754 podman[260751]: 2025-11-22 05:49:55.211165159 +0000 UTC m=+0.062287587 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:49:55 np0005531754 podman[260752]: 2025-11-22 05:49:55.242371355 +0000 UTC m=+0.080138156 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 22 00:49:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:49:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.776003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598776039, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 251, "total_data_size": 3459539, "memory_usage": 3516448, "flush_reason": "Manual Compaction"}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598815608, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3394645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16347, "largest_seqno": 18395, "table_properties": {"data_size": 3385323, "index_size": 5880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18399, "raw_average_key_size": 19, "raw_value_size": 3366834, "raw_average_value_size": 3628, "num_data_blocks": 266, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790368, "oldest_key_time": 1763790368, "file_creation_time": 1763790598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 39702 microseconds, and 13770 cpu microseconds.
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.815698) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3394645 bytes OK
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.815729) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.830504) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.830566) EVENT_LOG_v1 {"time_micros": 1763790598830555, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.830592) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3450969, prev total WAL file size 3450969, number of live WAL files 2.
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.832113) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3315KB)], [38(7503KB)]
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598832228, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11078353, "oldest_snapshot_seqno": -1}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4417 keys, 9313095 bytes, temperature: kUnknown
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598917226, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9313095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9279857, "index_size": 21096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106807, "raw_average_key_size": 24, "raw_value_size": 9196351, "raw_average_value_size": 2082, "num_data_blocks": 895, "num_entries": 4417, "num_filter_entries": 4417, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.917560) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9313095 bytes
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.919711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.2 rd, 109.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4931, records dropped: 514 output_compression: NoCompression
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.919762) EVENT_LOG_v1 {"time_micros": 1763790598919746, "job": 18, "event": "compaction_finished", "compaction_time_micros": 85073, "compaction_time_cpu_micros": 36793, "output_level": 6, "num_output_files": 1, "total_output_size": 9313095, "num_input_records": 4931, "num_output_records": 4417, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598921013, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598923717, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.831965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:49:58 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:50:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.166 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.166 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.166 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:50:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:50:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/591693286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.617 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.802 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.804 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.805 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.805 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.898 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.899 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:50:11 np0005531754 nova_compute[255660]: 2025-11-22 05:50:11.921 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:50:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:50:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242325174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.389 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.397 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.415 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.417 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.418 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.419 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.419 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.435 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.437 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.437 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 00:50:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:12 np0005531754 nova_compute[255660]: 2025-11-22 05:50:12.448 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:13 np0005531754 nova_compute[255660]: 2025-11-22 05:50:13.459 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:50:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:50:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:50:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:50:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:50:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:50:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ae4ab058-ec59-4925-b3aa-49f1eb823cd9 does not exist
Nov 22 00:50:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev a3a29f18-4d70-42db-8341-e81d57f081d6 does not exist
Nov 22 00:50:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 4a39b3e7-d555-4897-8e4d-da4e8525759c does not exist
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:50:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:14 np0005531754 podman[260989]: 2025-11-22 05:50:14.752581955 +0000 UTC m=+0.132690881 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:50:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:50:15 np0005531754 nova_compute[255660]: 2025-11-22 05:50:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:15 np0005531754 nova_compute[255660]: 2025-11-22 05:50:15.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:15 np0005531754 podman[261129]: 2025-11-22 05:50:15.357778946 +0000 UTC m=+0.066193411 container create f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 00:50:15 np0005531754 systemd[1]: Started libpod-conmon-f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f.scope.
Nov 22 00:50:15 np0005531754 podman[261129]: 2025-11-22 05:50:15.33437648 +0000 UTC m=+0.042790955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:50:15 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:50:15 np0005531754 podman[261129]: 2025-11-22 05:50:15.459696883 +0000 UTC m=+0.168111408 container init f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:50:15 np0005531754 podman[261129]: 2025-11-22 05:50:15.471099228 +0000 UTC m=+0.179513693 container start f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:50:15 np0005531754 podman[261129]: 2025-11-22 05:50:15.47526887 +0000 UTC m=+0.183683385 container attach f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:50:15 np0005531754 interesting_rosalind[261145]: 167 167
Nov 22 00:50:15 np0005531754 systemd[1]: libpod-f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f.scope: Deactivated successfully.
Nov 22 00:50:15 np0005531754 podman[261129]: 2025-11-22 05:50:15.480083409 +0000 UTC m=+0.188497874 container died f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 00:50:15 np0005531754 systemd[1]: var-lib-containers-storage-overlay-81a657a6f936d58d50111a864d8e613446753d4324b3dccea14314c14a55c7c0-merged.mount: Deactivated successfully.
Nov 22 00:50:15 np0005531754 podman[261129]: 2025-11-22 05:50:15.544239035 +0000 UTC m=+0.252653510 container remove f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:50:15 np0005531754 systemd[1]: libpod-conmon-f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f.scope: Deactivated successfully.
Nov 22 00:50:15 np0005531754 podman[261168]: 2025-11-22 05:50:15.783316401 +0000 UTC m=+0.060193981 container create a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:50:15 np0005531754 systemd[1]: Started libpod-conmon-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope.
Nov 22 00:50:15 np0005531754 podman[261168]: 2025-11-22 05:50:15.754068899 +0000 UTC m=+0.030946479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:50:15 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:50:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:15 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:15 np0005531754 podman[261168]: 2025-11-22 05:50:15.892705378 +0000 UTC m=+0.169582988 container init a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:50:15 np0005531754 podman[261168]: 2025-11-22 05:50:15.906943198 +0000 UTC m=+0.183820778 container start a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:50:15 np0005531754 podman[261168]: 2025-11-22 05:50:15.910758421 +0000 UTC m=+0.187636051 container attach a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 00:50:16 np0005531754 nova_compute[255660]: 2025-11-22 05:50:16.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:16 np0005531754 nova_compute[255660]: 2025-11-22 05:50:16.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:50:16 np0005531754 nova_compute[255660]: 2025-11-22 05:50:16.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:50:16 np0005531754 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:50:16 np0005531754 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:16 np0005531754 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:16 np0005531754 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:50:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:17 np0005531754 trusting_kilby[261184]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:50:17 np0005531754 trusting_kilby[261184]: --> relative data size: 1.0
Nov 22 00:50:17 np0005531754 trusting_kilby[261184]: --> All data devices are unavailable
Nov 22 00:50:17 np0005531754 systemd[1]: libpod-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope: Deactivated successfully.
Nov 22 00:50:17 np0005531754 podman[261168]: 2025-11-22 05:50:17.096704449 +0000 UTC m=+1.373582019 container died a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:50:17 np0005531754 systemd[1]: libpod-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope: Consumed 1.154s CPU time.
Nov 22 00:50:17 np0005531754 nova_compute[255660]: 2025-11-22 05:50:17.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:17 np0005531754 nova_compute[255660]: 2025-11-22 05:50:17.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:50:17 np0005531754 systemd[1]: var-lib-containers-storage-overlay-11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b-merged.mount: Deactivated successfully.
Nov 22 00:50:17 np0005531754 podman[261168]: 2025-11-22 05:50:17.180707956 +0000 UTC m=+1.457585526 container remove a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:50:17 np0005531754 systemd[1]: libpod-conmon-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope: Deactivated successfully.
Nov 22 00:50:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:17 np0005531754 podman[261366]: 2025-11-22 05:50:17.999174424 +0000 UTC m=+0.048004936 container create 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:50:18 np0005531754 systemd[1]: Started libpod-conmon-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope.
Nov 22 00:50:18 np0005531754 podman[261366]: 2025-11-22 05:50:17.972850309 +0000 UTC m=+0.021680911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:50:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:50:18 np0005531754 podman[261366]: 2025-11-22 05:50:18.093128922 +0000 UTC m=+0.141959454 container init 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:50:18 np0005531754 podman[261366]: 2025-11-22 05:50:18.103655374 +0000 UTC m=+0.152485886 container start 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:50:18 np0005531754 systemd[1]: libpod-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope: Deactivated successfully.
Nov 22 00:50:18 np0005531754 stupefied_elion[261383]: 167 167
Nov 22 00:50:18 np0005531754 podman[261366]: 2025-11-22 05:50:18.109572464 +0000 UTC m=+0.158402986 container attach 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:50:18 np0005531754 conmon[261383]: conmon 3c0a7c337e2763425150 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope/container/memory.events
Nov 22 00:50:18 np0005531754 podman[261366]: 2025-11-22 05:50:18.110231311 +0000 UTC m=+0.159061833 container died 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:50:18 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a61d16dbe76589d24fc3c642b0ccfd59c37222096a6dc19a6352a5982c35c9db-merged.mount: Deactivated successfully.
Nov 22 00:50:18 np0005531754 podman[261366]: 2025-11-22 05:50:18.151316253 +0000 UTC m=+0.200146765 container remove 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:50:18 np0005531754 systemd[1]: libpod-conmon-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope: Deactivated successfully.
Nov 22 00:50:18 np0005531754 podman[261408]: 2025-11-22 05:50:18.309137718 +0000 UTC m=+0.036543442 container create 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 00:50:18 np0005531754 systemd[1]: Started libpod-conmon-313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356.scope.
Nov 22 00:50:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:50:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:18 np0005531754 podman[261408]: 2025-11-22 05:50:18.37033755 +0000 UTC m=+0.097743314 container init 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:50:18 np0005531754 podman[261408]: 2025-11-22 05:50:18.377518963 +0000 UTC m=+0.104924687 container start 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:50:18 np0005531754 podman[261408]: 2025-11-22 05:50:18.381169981 +0000 UTC m=+0.108575785 container attach 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:50:18 np0005531754 podman[261408]: 2025-11-22 05:50:18.292413259 +0000 UTC m=+0.019819003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:50:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]: {
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:    "0": [
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:        {
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "devices": [
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "/dev/loop3"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            ],
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_name": "ceph_lv0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_size": "21470642176",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "name": "ceph_lv0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "tags": {
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cluster_name": "ceph",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.crush_device_class": "",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.encrypted": "0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osd_id": "0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.type": "block",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.vdo": "0"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            },
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "type": "block",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "vg_name": "ceph_vg0"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:        }
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:    ],
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:    "1": [
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:        {
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "devices": [
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "/dev/loop4"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            ],
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_name": "ceph_lv1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_size": "21470642176",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "name": "ceph_lv1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "tags": {
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cluster_name": "ceph",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.crush_device_class": "",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.encrypted": "0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osd_id": "1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.type": "block",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.vdo": "0"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            },
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "type": "block",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "vg_name": "ceph_vg1"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:        }
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:    ],
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:    "2": [
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:        {
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "devices": [
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "/dev/loop5"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            ],
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_name": "ceph_lv2",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_size": "21470642176",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "name": "ceph_lv2",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "tags": {
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.cluster_name": "ceph",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.crush_device_class": "",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.encrypted": "0",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osd_id": "2",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.type": "block",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:                "ceph.vdo": "0"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            },
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "type": "block",
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:            "vg_name": "ceph_vg2"
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:        }
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]:    ]
Nov 22 00:50:19 np0005531754 inspiring_dijkstra[261424]: }
Nov 22 00:50:19 np0005531754 systemd[1]: libpod-313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356.scope: Deactivated successfully.
Nov 22 00:50:19 np0005531754 podman[261408]: 2025-11-22 05:50:19.111604379 +0000 UTC m=+0.839010143 container died 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:50:19 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816-merged.mount: Deactivated successfully.
Nov 22 00:50:19 np0005531754 podman[261408]: 2025-11-22 05:50:19.201909512 +0000 UTC m=+0.929315236 container remove 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 00:50:19 np0005531754 systemd[1]: libpod-conmon-313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356.scope: Deactivated successfully.
Nov 22 00:50:19 np0005531754 podman[261589]: 2025-11-22 05:50:19.940200051 +0000 UTC m=+0.058317556 container create 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:50:19 np0005531754 systemd[1]: Started libpod-conmon-138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b.scope.
Nov 22 00:50:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:50:20 np0005531754 podman[261589]: 2025-11-22 05:50:19.911275464 +0000 UTC m=+0.029393019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:50:20 np0005531754 podman[261589]: 2025-11-22 05:50:20.021289987 +0000 UTC m=+0.139407562 container init 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:50:20 np0005531754 podman[261589]: 2025-11-22 05:50:20.028144441 +0000 UTC m=+0.146261956 container start 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:50:20 np0005531754 podman[261589]: 2025-11-22 05:50:20.032035805 +0000 UTC m=+0.150153290 container attach 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:50:20 np0005531754 zealous_montalcini[261605]: 167 167
Nov 22 00:50:20 np0005531754 systemd[1]: libpod-138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b.scope: Deactivated successfully.
Nov 22 00:50:20 np0005531754 podman[261589]: 2025-11-22 05:50:20.033461434 +0000 UTC m=+0.151578949 container died 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 00:50:20 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ef0fec6b5597638514c2ec3f48b8437661cf249e4b7c2f6f2ff8add03cafb2c7-merged.mount: Deactivated successfully.
Nov 22 00:50:20 np0005531754 podman[261589]: 2025-11-22 05:50:20.082595132 +0000 UTC m=+0.200712647 container remove 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 00:50:20 np0005531754 systemd[1]: libpod-conmon-138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b.scope: Deactivated successfully.
Nov 22 00:50:20 np0005531754 podman[261627]: 2025-11-22 05:50:20.26105679 +0000 UTC m=+0.057077083 container create 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:50:20 np0005531754 systemd[1]: Started libpod-conmon-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope.
Nov 22 00:50:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:50:20 np0005531754 podman[261627]: 2025-11-22 05:50:20.241836544 +0000 UTC m=+0.037856877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:50:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:50:20 np0005531754 podman[261627]: 2025-11-22 05:50:20.353460539 +0000 UTC m=+0.149480912 container init 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:50:20 np0005531754 podman[261627]: 2025-11-22 05:50:20.366227451 +0000 UTC m=+0.162247764 container start 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:50:20 np0005531754 podman[261627]: 2025-11-22 05:50:20.370391493 +0000 UTC m=+0.166411806 container attach 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:50:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]: {
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "osd_id": 1,
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "type": "bluestore"
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:    },
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "osd_id": 2,
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "type": "bluestore"
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:    },
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "osd_id": 0,
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:        "type": "bluestore"
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]:    }
Nov 22 00:50:21 np0005531754 wizardly_colden[261643]: }
Nov 22 00:50:21 np0005531754 systemd[1]: libpod-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope: Deactivated successfully.
Nov 22 00:50:21 np0005531754 systemd[1]: libpod-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope: Consumed 1.177s CPU time.
Nov 22 00:50:21 np0005531754 podman[261627]: 2025-11-22 05:50:21.534108957 +0000 UTC m=+1.330129270 container died 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:50:21 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a-merged.mount: Deactivated successfully.
Nov 22 00:50:21 np0005531754 podman[261627]: 2025-11-22 05:50:21.597245111 +0000 UTC m=+1.393265434 container remove 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 00:50:21 np0005531754 systemd[1]: libpod-conmon-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope: Deactivated successfully.
Nov 22 00:50:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:50:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:50:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:50:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:50:21 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 713fe6e3-0764-4749-8b31-032ee2b65400 does not exist
Nov 22 00:50:21 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 321e67d5-0623-4a7c-82f2-e27924958e2d does not exist
Nov 22 00:50:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 22 00:50:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 22 00:50:21 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.449010) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622449053, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 470, "num_deletes": 251, "total_data_size": 390795, "memory_usage": 399192, "flush_reason": "Manual Compaction"}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622453815, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 321399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18396, "largest_seqno": 18865, "table_properties": {"data_size": 318799, "index_size": 636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6816, "raw_average_key_size": 19, "raw_value_size": 313439, "raw_average_value_size": 913, "num_data_blocks": 28, "num_entries": 343, "num_filter_entries": 343, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790599, "oldest_key_time": 1763790599, "file_creation_time": 1763790622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4884 microseconds, and 2532 cpu microseconds.
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.453890) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 321399 bytes OK
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.453921) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.455575) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.455603) EVENT_LOG_v1 {"time_micros": 1763790622455593, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.455629) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 387986, prev total WAL file size 387986, number of live WAL files 2.
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.456322) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(313KB)], [41(9094KB)]
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622456368, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9634494, "oldest_snapshot_seqno": -1}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4247 keys, 6387801 bytes, temperature: kUnknown
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622513142, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6387801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6360093, "index_size": 16015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103679, "raw_average_key_size": 24, "raw_value_size": 6283886, "raw_average_value_size": 1479, "num_data_blocks": 675, "num_entries": 4247, "num_filter_entries": 4247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.513438) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6387801 bytes
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.514973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 112.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.9 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(49.9) write-amplify(19.9) OK, records in: 4760, records dropped: 513 output_compression: NoCompression
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.515050) EVENT_LOG_v1 {"time_micros": 1763790622514993, "job": 20, "event": "compaction_finished", "compaction_time_micros": 56865, "compaction_time_cpu_micros": 33510, "output_level": 6, "num_output_files": 1, "total_output_size": 6387801, "num_input_records": 4760, "num_output_records": 4247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622515360, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622518830, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.456214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.518992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:50:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 22 00:50:22 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 22 00:50:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 22 00:50:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 22 00:50:24 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 22 00:50:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Nov 22 00:50:26 np0005531754 podman[261742]: 2025-11-22 05:50:26.265736583 +0000 UTC m=+0.099635195 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 00:50:26 np0005531754 podman[261741]: 2025-11-22 05:50:26.280766246 +0000 UTC m=+0.114832063 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 00:50:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 22 00:50:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 22 00:50:26 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 22 00:50:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 16 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 MiB/s wr, 49 op/s
Nov 22 00:50:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 37 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.1 MiB/s wr, 42 op/s
Nov 22 00:50:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Nov 22 00:50:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 22 00:50:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 22 00:50:32 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 22 00:50:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 22 00:50:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Nov 22 00:50:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.5 MiB/s wr, 14 op/s
Nov 22 00:50:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:50:36.930 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:50:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:50:36.931 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:50:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:50:36.931 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:50:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 455 KiB/s wr, 12 op/s
Nov 22 00:50:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 22 00:50:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:50:43
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'vms', 'images']
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:50:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:50:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:45 np0005531754 podman[261783]: 2025-11-22 05:50:45.2932025 +0000 UTC m=+0.141055865 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 00:50:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:50:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/340953056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:50:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:50:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/340953056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:50:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:50 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:50:50.418 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:50:50 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:50:50.419 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 00:50:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 00:50:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 00:50:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 00:50:51 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "format": "json"}]: dispatch
Nov 22 00:50:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 00:50:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 00:50:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:50:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Nov 22 00:50:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.266792016669923e-07 of space, bias 4.0, pg target 0.0009920150420003907 quantized to 16 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:50:52 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:50:52 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.mscchl(active, since 26m)
Nov 22 00:50:53 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:50:53 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp'
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp' to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta'
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "format": "json"}]: dispatch
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:50:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:50:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:50:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489/.meta.tmp'
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489/.meta.tmp' to config b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489/.meta'
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "format": "json"}]: dispatch
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:50:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc/.meta.tmp'
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc/.meta.tmp' to config b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc/.meta'
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "format": "json"}]: dispatch
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 00:50:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:50:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:50:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s wr, 2 op/s
Nov 22 00:50:57 np0005531754 podman[261824]: 2025-11-22 05:50:57.241196156 +0000 UTC m=+0.096828279 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 22 00:50:57 np0005531754 podman[261825]: 2025-11-22 05:50:57.247443123 +0000 UTC m=+0.097220559 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 22 00:50:57 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:50:57.421 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:50:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:50:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277", "format": "json"}]: dispatch
Nov 22 00:50:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:50:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:50:58 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 00:50:58 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:50:58 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:50:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "format": "json"}]: dispatch
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6b1d082-aa60-414d-aa02-6f616d2261dc' of type subvolume
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.114+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6b1d082-aa60-414d-aa02-6f616d2261dc' of type subvolume
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "force": true, "format": "json"}]: dispatch
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc'' moved to trashcan
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "format": "json"}]: dispatch
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4d695e4c-80d1-4558-8e35-fa4463b56489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4d695e4c-80d1-4558-8e35-fa4463b56489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d695e4c-80d1-4558-8e35-fa4463b56489' of type subvolume
Nov 22 00:50:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.337+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d695e4c-80d1-4558-8e35-fa4463b56489' of type subvolume
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "force": true, "format": "json"}]: dispatch
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489'' moved to trashcan
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:50:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 00:51:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Nov 22 00:51:00 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.mscchl(active, since 26m)
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 4 op/s
Nov 22 00:51:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277_9dfa855e-1208-478c-a1ee-6451e9ea868d", "force": true, "format": "json"}]: dispatch
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277_9dfa855e-1208-478c-a1ee-6451e9ea868d, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp'
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp' to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta'
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277_9dfa855e-1208-478c-a1ee-6451e9ea868d, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277", "force": true, "format": "json"}]: dispatch
Nov 22 00:51:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:51:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp'
Nov 22 00:51:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp' to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta'
Nov 22 00:51:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:51:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:51:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c/.meta.tmp'
Nov 22 00:51:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c/.meta.tmp' to config b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c/.meta'
Nov 22 00:51:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:04 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "format": "json"}]: dispatch
Nov 22 00:51:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:51:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:51:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 4 op/s
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "format": "json"}]: dispatch
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:06 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:06.552+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15202c81-eb7f-4a9b-b839-74d8d3eac759' of type subvolume
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15202c81-eb7f-4a9b-b839-74d8d3eac759' of type subvolume
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "force": true, "format": "json"}]: dispatch
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759'' moved to trashcan
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 5 op/s
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93/.meta.tmp'
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93/.meta.tmp' to config b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93/.meta'
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "format": "json"}]: dispatch
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 00:51:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 00:51:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:51:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:51:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:51:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 22 00:51:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 22 00:51:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 22 00:51:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 5 op/s
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d9b34a10-7e37-4811-ad95-28431845630c", "format": "json"}]: dispatch
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d9b34a10-7e37-4811-ad95-28431845630c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d9b34a10-7e37-4811-ad95-28431845630c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:10 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:10.307+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd9b34a10-7e37-4811-ad95-28431845630c' of type subvolume
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd9b34a10-7e37-4811-ad95-28431845630c' of type subvolume
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "force": true, "format": "json"}]: dispatch
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c'' moved to trashcan
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 00:51:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 5 op/s
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378/.meta.tmp'
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378/.meta.tmp' to config b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378/.meta'
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "format": "json"}]: dispatch
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 00:51:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 00:51:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:51:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:51:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.754796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672755384, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 258, "total_data_size": 1037078, "memory_usage": 1052384, "flush_reason": "Manual Compaction"}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672774146, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1028635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18866, "largest_seqno": 19627, "table_properties": {"data_size": 1024615, "index_size": 1736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8945, "raw_average_key_size": 18, "raw_value_size": 1016355, "raw_average_value_size": 2130, "num_data_blocks": 79, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790622, "oldest_key_time": 1763790622, "file_creation_time": 1763790672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 19067 microseconds, and 7549 cpu microseconds.
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.774230) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1028635 bytes OK
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.774266) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.783919) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.783936) EVENT_LOG_v1 {"time_micros": 1763790672783929, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.783961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1033058, prev total WAL file size 1033058, number of live WAL files 2.
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.784658) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1004KB)], [44(6238KB)]
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672784744, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7416436, "oldest_snapshot_seqno": -1}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4194 keys, 7292195 bytes, temperature: kUnknown
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672908808, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7292195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7263338, "index_size": 17291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 103953, "raw_average_key_size": 24, "raw_value_size": 7186509, "raw_average_value_size": 1713, "num_data_blocks": 725, "num_entries": 4194, "num_filter_entries": 4194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.909146) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7292195 bytes
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.960679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.7 rd, 58.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.1 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 4724, records dropped: 530 output_compression: NoCompression
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.960744) EVENT_LOG_v1 {"time_micros": 1763790672960720, "job": 22, "event": "compaction_finished", "compaction_time_micros": 124154, "compaction_time_cpu_micros": 32164, "output_level": 6, "num_output_files": 1, "total_output_size": 7292195, "num_input_records": 4724, "num_output_records": 4194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672961260, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672963621, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.784502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:51:12 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.170 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.329 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.330 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.330 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.331 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.332 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 00:51:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:51:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:51:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:51:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:51:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:51:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:51:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:51:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827245382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:51:13 np0005531754 nova_compute[255660]: 2025-11-22 05:51:13.831 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.047 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.049 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.050 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.051 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5/.meta.tmp'
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5/.meta.tmp' to config b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5/.meta'
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "format": "json"}]: dispatch
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 00:51:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:51:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.405 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.406 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.647 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 00:51:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.743 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.744 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.763 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.795 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 00:51:14 np0005531754 nova_compute[255660]: 2025-11-22 05:51:14.814 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "format": "json"}]: dispatch
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f30f85a5-1564-4626-84fb-0c570e11fc93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f30f85a5-1564-4626-84fb-0c570e11fc93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:15 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:15.179+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f30f85a5-1564-4626-84fb-0c570e11fc93' of type subvolume
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f30f85a5-1564-4626-84fb-0c570e11fc93' of type subvolume
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "force": true, "format": "json"}]: dispatch
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93'' moved to trashcan
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:51:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 00:51:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:51:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3703170812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:51:15 np0005531754 nova_compute[255660]: 2025-11-22 05:51:15.337 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 00:51:15 np0005531754 nova_compute[255660]: 2025-11-22 05:51:15.345 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 00:51:15 np0005531754 nova_compute[255660]: 2025-11-22 05:51:15.366 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 00:51:15 np0005531754 nova_compute[255660]: 2025-11-22 05:51:15.368 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 00:51:15 np0005531754 nova_compute[255660]: 2025-11-22 05:51:15.369 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 00:51:16 np0005531754 podman[261933]: 2025-11-22 05:51:16.283720716 +0000 UTC m=+0.136174074 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 00:51:16 np0005531754 nova_compute[255660]: 2025-11-22 05:51:16.328 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:16 np0005531754 nova_compute[255660]: 2025-11-22 05:51:16.329 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:16 np0005531754 nova_compute[255660]: 2025-11-22 05:51:16.329 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:16 np0005531754 nova_compute[255660]: 2025-11-22 05:51:16.329 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 00:51:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 00:51:17 np0005531754 nova_compute[255660]: 2025-11-22 05:51:17.126 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:17 np0005531754 nova_compute[255660]: 2025-11-22 05:51:17.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:17 np0005531754 nova_compute[255660]: 2025-11-22 05:51:17.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:51:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 22 00:51:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 22 00:51:17 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "format": "json"}]: dispatch
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2cac985c-91b9-4f35-a81a-295d69c728b5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2cac985c-91b9-4f35-a81a-295d69c728b5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:17 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:17.955+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2cac985c-91b9-4f35-a81a-295d69c728b5' of type subvolume
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2cac985c-91b9-4f35-a81a-295d69c728b5' of type subvolume
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "force": true, "format": "json"}]: dispatch
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5'' moved to trashcan
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:51:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 00:51:18 np0005531754 nova_compute[255660]: 2025-11-22 05:51:18.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:18 np0005531754 nova_compute[255660]: 2025-11-22 05:51:18.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 00:51:18 np0005531754 nova_compute[255660]: 2025-11-22 05:51:18.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 00:51:18 np0005531754 nova_compute[255660]: 2025-11-22 05:51:18.153 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "format": "json"}]: dispatch
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:51:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:18.671+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d6b5119-ad0e-4013-9a5e-284fedb56378' of type subvolume
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d6b5119-ad0e-4013-9a5e-284fedb56378' of type subvolume
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "force": true, "format": "json"}]: dispatch
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378'' moved to trashcan
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:51:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 00:51:19 np0005531754 nova_compute[255660]: 2025-11-22 05:51:19.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1f68ab23-30d2-4b25-b726-bc4bc13231e8/.meta.tmp'
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1f68ab23-30d2-4b25-b726-bc4bc13231e8/.meta.tmp' to config b'/volumes/_nogroup/1f68ab23-30d2-4b25-b726-bc4bc13231e8/.meta'
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "format": "json"}]: dispatch
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 00:51:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 00:51:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:51:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:51:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6c1cff93-cfe8-43e8-b934-82a3cf7b6030/.meta.tmp'
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6c1cff93-cfe8-43e8-b934-82a3cf7b6030/.meta.tmp' to config b'/volumes/_nogroup/6c1cff93-cfe8-43e8-b934-82a3cf7b6030/.meta'
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "format": "json"}]: dispatch
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 00:51:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 00:51:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:51:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:51:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 23 KiB/s wr, 6 op/s
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:51:22 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 2c18c44f-0b52-46b5-8818-67b7682cec59 does not exist
Nov 22 00:51:22 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 8e87ce81-d908-43fa-9ad6-6f5489194787 does not exist
Nov 22 00:51:22 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev fe5690d2-1f69-4916-9f8f-8a16026552e1 does not exist
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:51:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:53:04 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:53:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:53:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:04 np0005531754 rsyslogd[1005]: imjournal: 1455 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 00:53:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 47 KiB/s wr, 6 op/s
Nov 22 00:53:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c16ce083-3588-49bf-a148-78d666432c7e", "format": "json"}]: dispatch
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c16ce083-3588-49bf-a148-78d666432c7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c16ce083-3588-49bf-a148-78d666432c7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:06 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:06.333+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c16ce083-3588-49bf-a148-78d666432c7e' of type subvolume
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c16ce083-3588-49bf-a148-78d666432c7e' of type subvolume
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c16ce083-3588-49bf-a148-78d666432c7e", "force": true, "format": "json"}]: dispatch
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c16ce083-3588-49bf-a148-78d666432c7e'' moved to trashcan
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 00:53:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 70 KiB/s wr, 8 op/s
Nov 22 00:53:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:07 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:53:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:53:08 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:53:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:53:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 52 KiB/s wr, 6 op/s
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:53:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57/.meta.tmp'
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57/.meta.tmp' to config b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57/.meta'
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "format": "json"}]: dispatch
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 00:53:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 00:53:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 7 op/s
Nov 22 00:53:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:53:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 71 KiB/s wr, 9 op/s
Nov 22 00:53:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae/.meta.tmp'
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae/.meta.tmp' to config b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae/.meta'
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "format": "json"}]: dispatch
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 00:53:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:53:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:53:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 42 KiB/s wr, 5 op/s
Nov 22 00:53:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "format": "json"}]: dispatch
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.176 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.176 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.177 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.177 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.177 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:53:15 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:53:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:15 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:53:15.476 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:53:15 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:53:15.477 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905754022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.632 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.840 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.842 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.843 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.843 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.908 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.908 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:53:15 np0005531754 nova_compute[255660]: 2025-11-22 05:53:15.932 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:53:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:53:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:53:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551900576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:53:16 np0005531754 nova_compute[255660]: 2025-11-22 05:53:16.468 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:53:16 np0005531754 nova_compute[255660]: 2025-11-22 05:53:16.475 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:53:16 np0005531754 nova_compute[255660]: 2025-11-22 05:53:16.494 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:53:16 np0005531754 nova_compute[255660]: 2025-11-22 05:53:16.497 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:53:16 np0005531754 nova_compute[255660]: 2025-11-22 05:53:16.498 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:53:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 46 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 9 op/s
Nov 22 00:53:17 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:53:17.479 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:53:17 np0005531754 nova_compute[255660]: 2025-11-22 05:53:17.499 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:17 np0005531754 nova_compute[255660]: 2025-11-22 05:53:17.499 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:17 np0005531754 nova_compute[255660]: 2025-11-22 05:53:17.500 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "format": "json"}]: dispatch
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:17 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:17.549+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bb542e3b-52e7-44e3-82c7-2e32e58f04ae' of type subvolume
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bb542e3b-52e7-44e3-82c7-2e32e58f04ae' of type subvolume
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "force": true, "format": "json"}]: dispatch
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae'' moved to trashcan
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:53:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 00:53:17 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 00:53:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:18 np0005531754 nova_compute[255660]: 2025-11-22 05:53:18.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:18 np0005531754 nova_compute[255660]: 2025-11-22 05:53:18.140 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "format": "json"}]: dispatch
Nov 22 00:53:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:53:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:53:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:53:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:53:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 46 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 48 KiB/s wr, 7 op/s
Nov 22 00:53:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:19 np0005531754 nova_compute[255660]: 2025-11-22 05:53:19.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:19 np0005531754 nova_compute[255660]: 2025-11-22 05:53:19.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:20 np0005531754 nova_compute[255660]: 2025-11-22 05:53:20.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:20 np0005531754 nova_compute[255660]: 2025-11-22 05:53:20.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:53:20 np0005531754 nova_compute[255660]: 2025-11-22 05:53:20.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:53:20 np0005531754 nova_compute[255660]: 2025-11-22 05:53:20.146 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:53:20 np0005531754 podman[264066]: 2025-11-22 05:53:20.234282971 +0000 UTC m=+0.091925899 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 00:53:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 9 op/s
Nov 22 00:53:21 np0005531754 nova_compute[255660]: 2025-11-22 05:53:21.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:21 np0005531754 nova_compute[255660]: 2025-11-22 05:53:21.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:53:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 00:53:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 67 KiB/s wr, 9 op/s
Nov 22 00:53:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d/.meta.tmp'
Nov 22 00:53:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d/.meta.tmp' to config b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d/.meta'
Nov 22 00:53:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 00:53:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "format": "json"}]: dispatch
Nov 22 00:53:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 00:53:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 00:53:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:23 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "target_sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 00:53:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, target_sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 00:53:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 00:53:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 00:53:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 76eca804-5302-4f65-9ec9-887e332e0764 for path b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101'
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, target_sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 2d92936c-d826-4675-9b10-c118c0461101)
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 2d92936c-d826-4675-9b10-c118c0461101) -- by 0 seconds
Nov 22 00:53:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:53:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:53:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:53:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 00:53:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:53:25 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:53:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:53:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:53:25 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.mscchl(active, since 28m)
Nov 22 00:53:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:53:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 47 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 94 KiB/s wr, 10 op/s
Nov 22 00:53:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:53:27 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:27 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:27 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.snap/ab580b6a-b19b-46ad-8a5e-1d8d79733bf6/fe3882fc-5c1d-4277-ae52-5cdb0f8dabd4' to b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/5706d2b5-6899-48ca-9951-7383c7ee3e88'
Nov 22 00:53:27 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 47 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 65 KiB/s wr, 8 op/s
Nov 22 00:53:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "format": "json"}]: dispatch
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:29 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:29.536+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ea6c199-1cc2-4500-b5d2-1d98c6523e3d' of type subvolume
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ea6c199-1cc2-4500-b5d2-1d98c6523e3d' of type subvolume
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "force": true, "format": "json"}]: dispatch
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] untracking 76eca804-5302-4f65-9ec9-887e332e0764
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 2d92936c-d826-4675-9b10-c118c0461101)
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d'' moved to trashcan
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:53:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 00:53:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 99 KiB/s wr, 10 op/s
Nov 22 00:53:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 00:53:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f/.meta.tmp'
Nov 22 00:53:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f/.meta.tmp' to config b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f/.meta'
Nov 22 00:53:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 00:53:31 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "format": "json"}]: dispatch
Nov 22 00:53:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 00:53:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 00:53:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:31 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:31 np0005531754 podman[264119]: 2025-11-22 05:53:31.233663456 +0000 UTC m=+0.087354275 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 00:53:31 np0005531754 podman[264120]: 2025-11-22 05:53:31.236705899 +0000 UTC m=+0.086622006 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:53:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 67 KiB/s wr, 10 op/s
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:53:33 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:53:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:53:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:53:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 66 KiB/s wr, 9 op/s
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "format": "json"}]: dispatch
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:35 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:35.005+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f' of type subvolume
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f' of type subvolume
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "force": true, "format": "json"}]: dispatch
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f'' moved to trashcan
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:53:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 00:53:36 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:53:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:36 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 47 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 86 KiB/s wr, 11 op/s
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:36 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:53:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:53:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:53:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:53:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:53:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:53:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10/.meta.tmp'
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10/.meta.tmp' to config b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10/.meta'
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "format": "json"}]: dispatch
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 00:53:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 47 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 56 KiB/s wr, 9 op/s
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:53:40 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 87 KiB/s wr, 10 op/s
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 2ce028e8-de6a-4700-9956-08f41103f333 does not exist
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e2dfa968-e428-40d6-9c91-55e6faa1eb1a does not exist
Nov 22 00:53:40 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev d374078e-d5dc-4dba-a3cb-edbe39f5dd75 does not exist
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:53:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:53:41 np0005531754 podman[264430]: 2025-11-22 05:53:41.629713522 +0000 UTC m=+0.107276460 container create fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:53:41 np0005531754 podman[264430]: 2025-11-22 05:53:41.549007059 +0000 UTC m=+0.026569967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:53:41 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:53:41 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:53:41 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:53:41 np0005531754 systemd[1]: Started libpod-conmon-fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6.scope.
Nov 22 00:53:41 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:53:41 np0005531754 podman[264430]: 2025-11-22 05:53:41.806022854 +0000 UTC m=+0.283585792 container init fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:53:41 np0005531754 podman[264430]: 2025-11-22 05:53:41.819032999 +0000 UTC m=+0.296595937 container start fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:53:41 np0005531754 relaxed_galois[264446]: 167 167
Nov 22 00:53:41 np0005531754 systemd[1]: libpod-fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6.scope: Deactivated successfully.
Nov 22 00:53:41 np0005531754 podman[264430]: 2025-11-22 05:53:41.829268689 +0000 UTC m=+0.306831627 container attach fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:53:41 np0005531754 podman[264430]: 2025-11-22 05:53:41.83151264 +0000 UTC m=+0.309075578 container died fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:53:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-5315e0c596324c1869a68c70fc3dc8a2a276ee51ecc7e4b40dc9d23ccc187a47-merged.mount: Deactivated successfully.
Nov 22 00:53:42 np0005531754 podman[264430]: 2025-11-22 05:53:42.006597528 +0000 UTC m=+0.484160466 container remove fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:53:42 np0005531754 systemd[1]: libpod-conmon-fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6.scope: Deactivated successfully.
Nov 22 00:53:42 np0005531754 podman[264471]: 2025-11-22 05:53:42.305087565 +0000 UTC m=+0.112299336 container create 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 00:53:42 np0005531754 podman[264471]: 2025-11-22 05:53:42.225030071 +0000 UTC m=+0.032241902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:53:42 np0005531754 systemd[1]: Started libpod-conmon-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope.
Nov 22 00:53:42 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:53:42 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:42 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:42 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:42 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:42 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:42 np0005531754 podman[264471]: 2025-11-22 05:53:42.485302525 +0000 UTC m=+0.292514266 container init 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 22 00:53:42 np0005531754 podman[264471]: 2025-11-22 05:53:42.49463997 +0000 UTC m=+0.301851751 container start 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:53:42 np0005531754 podman[264471]: 2025-11-22 05:53:42.545668242 +0000 UTC m=+0.352879983 container attach 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:53:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 54 KiB/s wr, 9 op/s
Nov 22 00:53:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:43 np0005531754 pensive_brown[264488]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:53:43 np0005531754 pensive_brown[264488]: --> relative data size: 1.0
Nov 22 00:53:43 np0005531754 pensive_brown[264488]: --> All data devices are unavailable
Nov 22 00:53:43 np0005531754 systemd[1]: libpod-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope: Deactivated successfully.
Nov 22 00:53:43 np0005531754 systemd[1]: libpod-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope: Consumed 1.103s CPU time.
Nov 22 00:53:43 np0005531754 podman[264471]: 2025-11-22 05:53:43.646025776 +0000 UTC m=+1.453237557 container died 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:53:43
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr']
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:53:43 np0005531754 systemd[1]: var-lib-containers-storage-overlay-934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3-merged.mount: Deactivated successfully.
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:53:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:53:44 np0005531754 podman[264471]: 2025-11-22 05:53:44.184680149 +0000 UTC m=+1.991891930 container remove 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:53:44 np0005531754 systemd[1]: libpod-conmon-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope: Deactivated successfully.
Nov 22 00:53:44 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:53:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:53:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:44 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 6 op/s
Nov 22 00:53:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:44 np0005531754 podman[264670]: 2025-11-22 05:53:44.968331799 +0000 UTC m=+0.052915356 container create 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 00:53:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:45 np0005531754 systemd[1]: Started libpod-conmon-83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f.scope.
Nov 22 00:53:45 np0005531754 podman[264670]: 2025-11-22 05:53:44.941236058 +0000 UTC m=+0.025819655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:53:45 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:53:45 np0005531754 podman[264670]: 2025-11-22 05:53:45.070332322 +0000 UTC m=+0.154915959 container init 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:53:45 np0005531754 podman[264670]: 2025-11-22 05:53:45.080132109 +0000 UTC m=+0.164715676 container start 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:53:45 np0005531754 podman[264670]: 2025-11-22 05:53:45.084414746 +0000 UTC m=+0.168998333 container attach 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:53:45 np0005531754 awesome_swartz[264687]: 167 167
Nov 22 00:53:45 np0005531754 systemd[1]: libpod-83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f.scope: Deactivated successfully.
Nov 22 00:53:45 np0005531754 podman[264670]: 2025-11-22 05:53:45.08857061 +0000 UTC m=+0.173154237 container died 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:53:45 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1deca70c3e43d069f3d73706731a86d10a6873642ea95859b718c2c061ce2e74-merged.mount: Deactivated successfully.
Nov 22 00:53:45 np0005531754 podman[264670]: 2025-11-22 05:53:45.142499542 +0000 UTC m=+0.227083129 container remove 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:53:45 np0005531754 systemd[1]: libpod-conmon-83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f.scope: Deactivated successfully.
Nov 22 00:53:45 np0005531754 podman[264710]: 2025-11-22 05:53:45.301115591 +0000 UTC m=+0.024402457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:53:45 np0005531754 podman[264710]: 2025-11-22 05:53:45.457100229 +0000 UTC m=+0.180387065 container create e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:53:45 np0005531754 systemd[1]: Started libpod-conmon-e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca.scope.
Nov 22 00:53:45 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:53:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:45 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:45 np0005531754 podman[264710]: 2025-11-22 05:53:45.571262195 +0000 UTC m=+0.294549001 container init e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cedb7eed-2602-4012-a237-08eac957da10", "format": "json"}]: dispatch
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cedb7eed-2602-4012-a237-08eac957da10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cedb7eed-2602-4012-a237-08eac957da10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:45.580+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cedb7eed-2602-4012-a237-08eac957da10' of type subvolume
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cedb7eed-2602-4012-a237-08eac957da10' of type subvolume
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "force": true, "format": "json"}]: dispatch
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 00:53:45 np0005531754 podman[264710]: 2025-11-22 05:53:45.588795294 +0000 UTC m=+0.312082140 container start e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10'' moved to trashcan
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:53:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 00:53:45 np0005531754 podman[264710]: 2025-11-22 05:53:45.597580753 +0000 UTC m=+0.320867569 container attach e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]: {
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:    "0": [
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:        {
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "devices": [
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "/dev/loop3"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            ],
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_name": "ceph_lv0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_size": "21470642176",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "name": "ceph_lv0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "tags": {
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cluster_name": "ceph",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.crush_device_class": "",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.encrypted": "0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osd_id": "0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.type": "block",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.vdo": "0"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            },
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "type": "block",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "vg_name": "ceph_vg0"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:        }
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:    ],
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:    "1": [
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:        {
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "devices": [
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "/dev/loop4"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            ],
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_name": "ceph_lv1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_size": "21470642176",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "name": "ceph_lv1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "tags": {
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cluster_name": "ceph",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.crush_device_class": "",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.encrypted": "0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osd_id": "1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.type": "block",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.vdo": "0"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            },
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "type": "block",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "vg_name": "ceph_vg1"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:        }
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:    ],
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:    "2": [
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:        {
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "devices": [
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "/dev/loop5"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            ],
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_name": "ceph_lv2",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_size": "21470642176",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "name": "ceph_lv2",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "tags": {
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.cluster_name": "ceph",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.crush_device_class": "",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.encrypted": "0",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osd_id": "2",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.type": "block",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:                "ceph.vdo": "0"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            },
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "type": "block",
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:            "vg_name": "ceph_vg2"
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:        }
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]:    ]
Nov 22 00:53:46 np0005531754 sleepy_blackburn[264726]: }
Nov 22 00:53:46 np0005531754 systemd[1]: libpod-e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca.scope: Deactivated successfully.
Nov 22 00:53:46 np0005531754 podman[264710]: 2025-11-22 05:53:46.401241899 +0000 UTC m=+1.124528705 container died e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:53:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6-merged.mount: Deactivated successfully.
Nov 22 00:53:46 np0005531754 podman[264710]: 2025-11-22 05:53:46.489696043 +0000 UTC m=+1.212982839 container remove e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 00:53:46 np0005531754 systemd[1]: libpod-conmon-e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca.scope: Deactivated successfully.
Nov 22 00:53:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 9 op/s
Nov 22 00:53:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:53:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1918865413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:53:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:53:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1918865413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:53:47 np0005531754 podman[264888]: 2025-11-22 05:53:47.251301411 +0000 UTC m=+0.048370861 container create 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 00:53:47 np0005531754 systemd[1]: Started libpod-conmon-4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268.scope.
Nov 22 00:53:47 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:53:47 np0005531754 podman[264888]: 2025-11-22 05:53:47.228850248 +0000 UTC m=+0.025919728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:53:47 np0005531754 podman[264888]: 2025-11-22 05:53:47.336075645 +0000 UTC m=+0.133145135 container init 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:53:47 np0005531754 podman[264888]: 2025-11-22 05:53:47.343786615 +0000 UTC m=+0.140856065 container start 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:53:47 np0005531754 elegant_shockley[264905]: 167 167
Nov 22 00:53:47 np0005531754 podman[264888]: 2025-11-22 05:53:47.34983643 +0000 UTC m=+0.146905900 container attach 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:53:47 np0005531754 systemd[1]: libpod-4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268.scope: Deactivated successfully.
Nov 22 00:53:47 np0005531754 podman[264888]: 2025-11-22 05:53:47.350244292 +0000 UTC m=+0.147313802 container died 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:53:47 np0005531754 systemd[1]: var-lib-containers-storage-overlay-dba0aa61c19abd8c7b1dfc566daa619732413485a7080b0d1f002949c4aafffb-merged.mount: Deactivated successfully.
Nov 22 00:53:47 np0005531754 podman[264888]: 2025-11-22 05:53:47.399043084 +0000 UTC m=+0.196112534 container remove 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 00:53:47 np0005531754 systemd[1]: libpod-conmon-4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268.scope: Deactivated successfully.
Nov 22 00:53:47 np0005531754 podman[264928]: 2025-11-22 05:53:47.578217044 +0000 UTC m=+0.048351051 container create 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:53:47 np0005531754 systemd[1]: Started libpod-conmon-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope.
Nov 22 00:53:47 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:53:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:47 np0005531754 podman[264928]: 2025-11-22 05:53:47.552647716 +0000 UTC m=+0.022781813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:53:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:47 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:53:47 np0005531754 podman[264928]: 2025-11-22 05:53:47.661407945 +0000 UTC m=+0.131542002 container init 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:53:47 np0005531754 podman[264928]: 2025-11-22 05:53:47.667984134 +0000 UTC m=+0.138118141 container start 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:53:47 np0005531754 podman[264928]: 2025-11-22 05:53:47.671614513 +0000 UTC m=+0.141748520 container attach 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:53:48 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:48 np0005531754 clever_spence[264944]: {
Nov 22 00:53:48 np0005531754 clever_spence[264944]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "osd_id": 1,
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "type": "bluestore"
Nov 22 00:53:48 np0005531754 clever_spence[264944]:    },
Nov 22 00:53:48 np0005531754 clever_spence[264944]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "osd_id": 2,
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "type": "bluestore"
Nov 22 00:53:48 np0005531754 clever_spence[264944]:    },
Nov 22 00:53:48 np0005531754 clever_spence[264944]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "osd_id": 0,
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:53:48 np0005531754 clever_spence[264944]:        "type": "bluestore"
Nov 22 00:53:48 np0005531754 clever_spence[264944]:    }
Nov 22 00:53:48 np0005531754 clever_spence[264944]: }
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 63 KiB/s wr, 8 op/s
Nov 22 00:53:48 np0005531754 systemd[1]: libpod-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope: Deactivated successfully.
Nov 22 00:53:48 np0005531754 systemd[1]: libpod-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope: Consumed 1.083s CPU time.
Nov 22 00:53:48 np0005531754 podman[264928]: 2025-11-22 05:53:48.754986223 +0000 UTC m=+1.225120250 container died 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:53:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5-merged.mount: Deactivated successfully.
Nov 22 00:53:48 np0005531754 podman[264928]: 2025-11-22 05:53:48.834664508 +0000 UTC m=+1.304798535 container remove 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 00:53:48 np0005531754 systemd[1]: libpod-conmon-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope: Deactivated successfully.
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:53:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 1df39db0-85ea-4b9f-b213-a5a09958f86a does not exist
Nov 22 00:53:48 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 3d972d76-7f1f-41df-bdbd-82bb7dd633ba does not exist
Nov 22 00:53:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:53:49 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:53:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 80 KiB/s wr, 8 op/s
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e/.meta.tmp'
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e/.meta.tmp' to config b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e/.meta'
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "format": "json"}]: dispatch
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 00:53:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:51 np0005531754 podman[265044]: 2025-11-22 05:53:51.274718739 +0000 UTC m=+0.130652408 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:53:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:53:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:51 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 49 KiB/s wr, 7 op/s
Nov 22 00:53:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:52 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00010924883603568405 of space, bias 4.0, pg target 0.13109860324282085 quantized to 16 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:53:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:53:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 5 op/s
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:53:55 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:53:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:53:55 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:53:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 73 KiB/s wr, 8 op/s
Nov 22 00:53:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:53:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4847 writes, 21K keys, 4847 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4847 writes, 4847 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1497 writes, 6905 keys, 1497 commit groups, 1.0 writes per commit group, ingest: 9.50 MB, 0.02 MB/s
Interval WAL: 1497 writes, 1497 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    105.0      0.23              0.10        12    0.019       0      0       0.0       0.0
  L6      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    138.5    113.8      0.68              0.29        11    0.061     48K   5784       0.0       0.0
 Sum      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    103.7    111.6      0.90              0.39        23    0.039     48K   5784       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2     96.5     97.3      0.46              0.18        10    0.046     23K   2598       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    138.5    113.8      0.68              0.29        11    0.061     48K   5784       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    109.4      0.22              0.10        11    0.020       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.023, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.9 seconds
Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 304.00 MB usage: 8.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00012 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(550,8.17 MB,2.68871%) FilterBlock(24,141.98 KB,0.0456107%) IndexBlock(24,270.95 KB,0.0870403%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "format": "json"}]: dispatch
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:81a5c3f3-2894-44cc-9d89-89c13467813e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:81a5c3f3-2894-44cc-9d89-89c13467813e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81a5c3f3-2894-44cc-9d89-89c13467813e' of type subvolume
Nov 22 00:53:57 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:57.055+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81a5c3f3-2894-44cc-9d89-89c13467813e' of type subvolume
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "force": true, "format": "json"}]: dispatch
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e'' moved to trashcan
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp'
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp' to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta'
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "format": "json"}]: dispatch
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:53:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:53:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:53:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:53:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 44 KiB/s wr, 6 op/s
Nov 22 00:53:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:53:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:53:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:53:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 00:54:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 00:54:02 np0005531754 podman[265071]: 2025-11-22 05:54:02.228748464 +0000 UTC m=+0.079033738 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 00:54:02 np0005531754 podman[265072]: 2025-11-22 05:54:02.241815861 +0000 UTC m=+0.088128716 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:02 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "format": "json"}]: dispatch
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:02.611+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6eab6156-d31f-4c5e-8b3f-a70a75baac57' of type subvolume
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6eab6156-d31f-4c5e-8b3f-a70a75baac57' of type subvolume
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57'' moved to trashcan
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb", "format": "json"}]: dispatch
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 9 op/s
Nov 22 00:54:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:03 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:54:03 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:54:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 66 KiB/s wr, 7 op/s
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb_e075958d-e50e-4903-9677-98e9d6e8b448", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb_e075958d-e50e-4903-9677-98e9d6e8b448, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp'
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp' to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta'
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb_e075958d-e50e-4903-9677-98e9d6e8b448, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp'
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp' to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta'
Nov 22 00:54:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:54:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:06 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 89 KiB/s wr, 10 op/s
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 22 00:54:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 22 00:54:07 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 22 00:54:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101'' moved to trashcan
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 00:54:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 77 KiB/s wr, 9 op/s
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "format": "json"}]: dispatch
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39fe5e7b-616f-4319-8856-cfe7e482fa98' of type subvolume
Nov 22 00:54:09 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:09.245+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39fe5e7b-616f-4319-8856-cfe7e482fa98' of type subvolume
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98'' moved to trashcan
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:54:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:54:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:54:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:10 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 67 KiB/s wr, 7 op/s
Nov 22 00:54:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:54:11 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6_b4b7b05d-0976-4cf3-a526-f3a3648db0ed", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6_b4b7b05d-0976-4cf3-a526-f3a3648db0ed, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6_b4b7b05d-0976-4cf3-a526-f3a3648db0ed, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:54:11 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 00:54:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:54:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 00:54:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:54:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:54:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:13 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2/.meta.tmp'
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2/.meta.tmp' to config b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2/.meta'
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "format": "json"}]: dispatch
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.166 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "format": "json"}]: dispatch
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:15.385+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '576c395a-0c7b-4d45-a49a-9d0c63369a89' of type subvolume
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '576c395a-0c7b-4d45-a49a-9d0c63369a89' of type subvolume
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89'' moved to trashcan
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 00:54:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:54:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023680856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.656 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.849 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.850 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.851 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.851 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.985 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:54:15 np0005531754 nova_compute[255660]: 2025-11-22 05:54:15.985 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:54:16 np0005531754 nova_compute[255660]: 2025-11-22 05:54:16.015 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:54:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:54:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278697648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:54:16 np0005531754 nova_compute[255660]: 2025-11-22 05:54:16.474 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:54:16 np0005531754 nova_compute[255660]: 2025-11-22 05:54:16.480 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:54:16 np0005531754 nova_compute[255660]: 2025-11-22 05:54:16.522 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:54:16 np0005531754 nova_compute[255660]: 2025-11-22 05:54:16.525 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:54:16 np0005531754 nova_compute[255660]: 2025-11-22 05:54:16.526 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:54:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 10 op/s
Nov 22 00:54:17 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:54:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 22 00:54:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 22 00:54:17 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:54:18 np0005531754 nova_compute[255660]: 2025-11-22 05:54:18.528 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:18 np0005531754 nova_compute[255660]: 2025-11-22 05:54:18.528 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:18 np0005531754 nova_compute[255660]: 2025-11-22 05:54:18.529 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:18 np0005531754 nova_compute[255660]: 2025-11-22 05:54:18.529 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:18 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 97 KiB/s wr, 13 op/s
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0/.meta.tmp'
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0/.meta.tmp' to config b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0/.meta'
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "format": "json"}]: dispatch
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 00:54:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:54:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:54:20 np0005531754 nova_compute[255660]: 2025-11-22 05:54:20.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 46 KiB/s wr, 6 op/s
Nov 22 00:54:21 np0005531754 nova_compute[255660]: 2025-11-22 05:54:21.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:54:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:21 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:21 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:22 np0005531754 nova_compute[255660]: 2025-11-22 05:54:22.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:22 np0005531754 nova_compute[255660]: 2025-11-22 05:54:22.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:54:22 np0005531754 nova_compute[255660]: 2025-11-22 05:54:22.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:54:22 np0005531754 nova_compute[255660]: 2025-11-22 05:54:22.156 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:54:22 np0005531754 podman[265158]: 2025-11-22 05:54:22.254714705 +0000 UTC m=+0.108342268 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "format": "json"}]: dispatch
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48201a49-7fb5-455c-9d81-35b89fbf42a0' of type subvolume
Nov 22 00:54:22 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:22.635+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48201a49-7fb5-455c-9d81-35b89fbf42a0' of type subvolume
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0'' moved to trashcan
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 00:54:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 99 KiB/s wr, 12 op/s
Nov 22 00:54:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 22 00:54:23 np0005531754 nova_compute[255660]: 2025-11-22 05:54:23.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:23 np0005531754 nova_compute[255660]: 2025-11-22 05:54:23.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:54:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 22 00:54:23 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 22 00:54:24 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:54:24.169 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:54:24 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:54:24.170 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:54:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 61 KiB/s wr, 7 op/s
Nov 22 00:54:24 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:54:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:54:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 00:54:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:54:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:54:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:25 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:54:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:25 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:54:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "format": "json"}]: dispatch
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:26 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:26.595+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9c16f3c1-b6c6-4461-9394-db28e06b71e2' of type subvolume
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9c16f3c1-b6c6-4461-9394-db28e06b71e2' of type subvolume
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2'' moved to trashcan
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 00:54:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 238 B/s rd, 50 KiB/s wr, 6 op/s
Nov 22 00:54:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:54:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 00:54:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:54:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:28 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:28 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:29 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e/.meta.tmp'
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e/.meta.tmp' to config b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e/.meta'
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "format": "json"}]: dispatch
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 00:54:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:54:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:32 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 8 op/s
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:54:32 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:54:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:33 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:54:33.172 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:54:33 np0005531754 podman[265186]: 2025-11-22 05:54:33.230407872 +0000 UTC m=+0.075640395 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 22 00:54:33 np0005531754 podman[265187]: 2025-11-22 05:54:33.239603423 +0000 UTC m=+0.081369392 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "format": "json"}]: dispatch
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:34 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:34.352+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd572fcc0-c0a8-4fe7-b2ef-39477199386e' of type subvolume
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd572fcc0-c0a8-4fe7-b2ef-39477199386e' of type subvolume
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e'' moved to trashcan
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 00:54:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 353 B/s rd, 66 KiB/s wr, 6 op/s
Nov 22 00:54:35 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:54:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:35 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 00:54:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:54:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:54:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:54:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:54:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:54:36.935 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:54:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 52 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 99 KiB/s wr, 10 op/s
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:39 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/.meta.tmp'
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/.meta.tmp' to config b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/.meta'
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "format": "json"}]: dispatch
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:54:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 52 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 7 op/s
Nov 22 00:54:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 10 op/s
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "tenant_id": "ff09e2486e9d4c72b3f5e832bcf1885a", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume authorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, tenant_id:ff09e2486e9d4c72b3f5e832bcf1885a, vol_name:cephfs) < ""
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"} v 0) v1
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1135923250 with tenant ff09e2486e9d4c72b3f5e832bcf1885a
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume authorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, tenant_id:ff09e2486e9d4c72b3f5e832bcf1885a, vol_name:cephfs) < ""
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:54:43
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['images', '.mgr', '.rgw.root', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b919d90>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536ba1da60>)]
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:43 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:54:43 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2223829226
Nov 22 00:54:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 6 op/s
Nov 22 00:54:44 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.mscchl(active, since 30m)
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume deauthorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"} v 0) v1
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"} v 0) v1
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]': finished
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume deauthorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume evict, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1135923250, client_metadata.root=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0
Nov 22 00:54:45 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1135923250,client_metadata.root=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0],prefix=session evict} (starting...)
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume evict, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "format": "json"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:45.399+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f' of type subvolume
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f' of type subvolume
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "force": true, "format": "json"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f'' moved to trashcan
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:54:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]: dispatch
Nov 22 00:54:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]': finished
Nov 22 00:54:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 7 op/s
Nov 22 00:54:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:54:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:54:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:54:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:54:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:47 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:54:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237340005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:54:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:54:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237340005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:54:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:54:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:54:47 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:54:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/.meta.tmp'
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/.meta.tmp' to config b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/.meta'
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "format": "json"}]: dispatch
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:54:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:54:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 116 KiB/s wr, 12 op/s
Nov 22 00:54:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:54:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:54:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:54:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:54:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:54:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:54:49 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev fc62f115-94dc-49d9-a8cc-444a22989858 does not exist
Nov 22 00:54:49 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 4ae354cc-d207-4de3-a59b-52e5be30f005 does not exist
Nov 22 00:54:49 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 1179f9ca-e9f1-4f9a-a3d9-fcc977ad9fda does not exist
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:54:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:54:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:50 np0005531754 podman[265498]: 2025-11-22 05:54:50.677071544 +0000 UTC m=+0.072869881 container create deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:54:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:50 np0005531754 podman[265498]: 2025-11-22 05:54:50.644174525 +0000 UTC m=+0.039972862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:54:50 np0005531754 systemd[1]: Started libpod-conmon-deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96.scope.
Nov 22 00:54:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 80 KiB/s wr, 9 op/s
Nov 22 00:54:50 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:54:51 np0005531754 podman[265498]: 2025-11-22 05:54:51.015823698 +0000 UTC m=+0.411622065 container init deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:54:51 np0005531754 podman[265498]: 2025-11-22 05:54:51.025461452 +0000 UTC m=+0.421259819 container start deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 00:54:51 np0005531754 romantic_lamarr[265515]: 167 167
Nov 22 00:54:51 np0005531754 systemd[1]: libpod-deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96.scope: Deactivated successfully.
Nov 22 00:54:51 np0005531754 podman[265498]: 2025-11-22 05:54:51.069120444 +0000 UTC m=+0.464918811 container attach deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:54:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:54:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:54:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:51 np0005531754 podman[265498]: 2025-11-22 05:54:51.069829843 +0000 UTC m=+0.465628210 container died deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 00:54:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8b7890fd80b068d179889d38e6042c2e65d9595059b6d968274941080e2d01d7-merged.mount: Deactivated successfully.
Nov 22 00:54:51 np0005531754 podman[265498]: 2025-11-22 05:54:51.214643495 +0000 UTC m=+0.610441862 container remove deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 22 00:54:51 np0005531754 systemd[1]: libpod-conmon-deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96.scope: Deactivated successfully.
Nov 22 00:54:51 np0005531754 podman[265539]: 2025-11-22 05:54:51.416054483 +0000 UTC m=+0.031681195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:54:52 np0005531754 podman[265539]: 2025-11-22 05:54:52.082288878 +0000 UTC m=+0.697915510 container create e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:54:52 np0005531754 systemd[1]: Started libpod-conmon-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope.
Nov 22 00:54:52 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:54:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:52 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:52 np0005531754 podman[265539]: 2025-11-22 05:54:52.481678898 +0000 UTC m=+1.097305520 container init e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 00:54:52 np0005531754 podman[265539]: 2025-11-22 05:54:52.494167779 +0000 UTC m=+1.109794381 container start e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:54:52 np0005531754 podman[265539]: 2025-11-22 05:54:52.522348758 +0000 UTC m=+1.137975390 container attach e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/.meta.tmp'
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/.meta.tmp' to config b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/.meta'
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "format": "json"}]: dispatch
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:54:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:54:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00018543050400468843 of space, bias 4.0, pg target 0.2225166048056261 quantized to 16 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:54:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:54:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:53 np0005531754 podman[265560]: 2025-11-22 05:54:53.274351045 +0000 UTC m=+0.122496565 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 22 00:54:53 np0005531754 beautiful_noether[265555]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:54:53 np0005531754 beautiful_noether[265555]: --> relative data size: 1.0
Nov 22 00:54:53 np0005531754 beautiful_noether[265555]: --> All data devices are unavailable
Nov 22 00:54:53 np0005531754 systemd[1]: libpod-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope: Deactivated successfully.
Nov 22 00:54:53 np0005531754 podman[265539]: 2025-11-22 05:54:53.688647152 +0000 UTC m=+2.304273784 container died e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 00:54:53 np0005531754 systemd[1]: libpod-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope: Consumed 1.134s CPU time.
Nov 22 00:54:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay-83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d-merged.mount: Deactivated successfully.
Nov 22 00:54:53 np0005531754 podman[265539]: 2025-11-22 05:54:53.782286818 +0000 UTC m=+2.397913450 container remove e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 00:54:53 np0005531754 systemd[1]: libpod-conmon-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope: Deactivated successfully.
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:54:54 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:54:54 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:54:54 np0005531754 podman[265763]: 2025-11-22 05:54:54.535814505 +0000 UTC m=+0.045076321 container create 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:54:54 np0005531754 systemd[1]: Started libpod-conmon-680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298.scope.
Nov 22 00:54:54 np0005531754 podman[265763]: 2025-11-22 05:54:54.517113595 +0000 UTC m=+0.026375451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:54:54 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:54:54 np0005531754 podman[265763]: 2025-11-22 05:54:54.638964341 +0000 UTC m=+0.148226237 container init 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:54:54 np0005531754 podman[265763]: 2025-11-22 05:54:54.648112651 +0000 UTC m=+0.157374507 container start 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 00:54:54 np0005531754 podman[265763]: 2025-11-22 05:54:54.652009677 +0000 UTC m=+0.161271563 container attach 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:54:54 np0005531754 recursing_shockley[265780]: 167 167
Nov 22 00:54:54 np0005531754 systemd[1]: libpod-680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298.scope: Deactivated successfully.
Nov 22 00:54:54 np0005531754 podman[265763]: 2025-11-22 05:54:54.654355541 +0000 UTC m=+0.163617397 container died 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:54:54 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3cf037877dc01d3c25e94789a9fac8d76f5fe5b4a8a7055cf54c3a1ffcd6d796-merged.mount: Deactivated successfully.
Nov 22 00:54:54 np0005531754 podman[265763]: 2025-11-22 05:54:54.703177334 +0000 UTC m=+0.212439200 container remove 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 00:54:54 np0005531754 systemd[1]: libpod-conmon-680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298.scope: Deactivated successfully.
Nov 22 00:54:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 00:54:54 np0005531754 podman[265804]: 2025-11-22 05:54:54.916081885 +0000 UTC m=+0.061303024 container create eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:54:54 np0005531754 systemd[1]: Started libpod-conmon-eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578.scope.
Nov 22 00:54:54 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:54:54 np0005531754 podman[265804]: 2025-11-22 05:54:54.895858483 +0000 UTC m=+0.041079652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:54:54 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:55 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:55 np0005531754 podman[265804]: 2025-11-22 05:54:55.037418677 +0000 UTC m=+0.182639826 container init eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 00:54:55 np0005531754 podman[265804]: 2025-11-22 05:54:55.050765461 +0000 UTC m=+0.195986620 container start eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 00:54:55 np0005531754 podman[265804]: 2025-11-22 05:54:55.057500265 +0000 UTC m=+0.202721404 container attach eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]: {
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:    "0": [
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:        {
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "devices": [
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "/dev/loop3"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            ],
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_name": "ceph_lv0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_size": "21470642176",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "name": "ceph_lv0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "tags": {
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cluster_name": "ceph",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.crush_device_class": "",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.encrypted": "0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osd_id": "0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.type": "block",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.vdo": "0"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            },
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "type": "block",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "vg_name": "ceph_vg0"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:        }
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:    ],
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:    "1": [
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:        {
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "devices": [
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "/dev/loop4"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            ],
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_name": "ceph_lv1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_size": "21470642176",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "name": "ceph_lv1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "tags": {
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cluster_name": "ceph",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.crush_device_class": "",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.encrypted": "0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osd_id": "1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.type": "block",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.vdo": "0"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            },
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "type": "block",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "vg_name": "ceph_vg1"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:        }
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:    ],
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:    "2": [
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:        {
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "devices": [
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "/dev/loop5"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            ],
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_name": "ceph_lv2",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_size": "21470642176",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "name": "ceph_lv2",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "tags": {
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.cluster_name": "ceph",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.crush_device_class": "",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.encrypted": "0",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osd_id": "2",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.type": "block",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:                "ceph.vdo": "0"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            },
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "type": "block",
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:            "vg_name": "ceph_vg2"
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:        }
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]:    ]
Nov 22 00:54:55 np0005531754 youthful_mcnulty[265821]: }
Nov 22 00:54:55 np0005531754 systemd[1]: libpod-eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578.scope: Deactivated successfully.
Nov 22 00:54:55 np0005531754 podman[265804]: 2025-11-22 05:54:55.894446109 +0000 UTC m=+1.039667248 container died eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 00:54:55 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:54:55 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 00:54:55 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10-merged.mount: Deactivated successfully.
Nov 22 00:54:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 22 00:54:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 00:54:55 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID Joe with tenant 525ba1ccf0d546c7b4118a0855306190
Nov 22 00:54:55 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:55 np0005531754 podman[265804]: 2025-11-22 05:54:55.977527777 +0000 UTC m=+1.122748916 container remove eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:54:55 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:55 np0005531754 systemd[1]: libpod-conmon-eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578.scope: Deactivated successfully.
Nov 22 00:54:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 00:54:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 00:54:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:56 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:56 np0005531754 podman[265982]: 2025-11-22 05:54:56.739585537 +0000 UTC m=+0.051278020 container create 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:54:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 84 KiB/s wr, 10 op/s
Nov 22 00:54:56 np0005531754 systemd[1]: Started libpod-conmon-3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8.scope.
Nov 22 00:54:56 np0005531754 podman[265982]: 2025-11-22 05:54:56.712820476 +0000 UTC m=+0.024513019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:54:56 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:54:56 np0005531754 podman[265982]: 2025-11-22 05:54:56.844958223 +0000 UTC m=+0.156650766 container init 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:54:56 np0005531754 podman[265982]: 2025-11-22 05:54:56.858985386 +0000 UTC m=+0.170677849 container start 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:54:56 np0005531754 podman[265982]: 2025-11-22 05:54:56.862337197 +0000 UTC m=+0.174029750 container attach 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:54:56 np0005531754 friendly_snyder[265998]: 167 167
Nov 22 00:54:56 np0005531754 systemd[1]: libpod-3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8.scope: Deactivated successfully.
Nov 22 00:54:56 np0005531754 podman[265982]: 2025-11-22 05:54:56.868726772 +0000 UTC m=+0.180419295 container died 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:54:56 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3da5f466677eaf521a76d3525617bcb7e593135bf9f6e1646ee20ed707b9ddef-merged.mount: Deactivated successfully.
Nov 22 00:54:56 np0005531754 podman[265982]: 2025-11-22 05:54:56.923509667 +0000 UTC m=+0.235202130 container remove 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 00:54:56 np0005531754 systemd[1]: libpod-conmon-3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8.scope: Deactivated successfully.
Nov 22 00:54:57 np0005531754 podman[266022]: 2025-11-22 05:54:57.180433859 +0000 UTC m=+0.075739308 container create 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:54:57 np0005531754 systemd[1]: Started libpod-conmon-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope.
Nov 22 00:54:57 np0005531754 podman[266022]: 2025-11-22 05:54:57.150278617 +0000 UTC m=+0.045584136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:54:57 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:54:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:57 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:54:57 np0005531754 podman[266022]: 2025-11-22 05:54:57.291786449 +0000 UTC m=+0.187091938 container init 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:54:57 np0005531754 podman[266022]: 2025-11-22 05:54:57.308104054 +0000 UTC m=+0.203409463 container start 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 00:54:57 np0005531754 podman[266022]: 2025-11-22 05:54:57.311531008 +0000 UTC m=+0.206836417 container attach 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:54:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:54:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:54:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:57 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:54:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:54:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]: {
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "osd_id": 1,
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "type": "bluestore"
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:    },
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "osd_id": 2,
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "type": "bluestore"
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:    },
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "osd_id": 0,
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:        "type": "bluestore"
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]:    }
Nov 22 00:54:58 np0005531754 interesting_poitras[266039]: }
Nov 22 00:54:58 np0005531754 systemd[1]: libpod-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope: Deactivated successfully.
Nov 22 00:54:58 np0005531754 podman[266022]: 2025-11-22 05:54:58.345198111 +0000 UTC m=+1.240503570 container died 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 00:54:58 np0005531754 systemd[1]: libpod-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope: Consumed 1.041s CPU time.
Nov 22 00:54:58 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be-merged.mount: Deactivated successfully.
Nov 22 00:54:58 np0005531754 podman[266022]: 2025-11-22 05:54:58.422335397 +0000 UTC m=+1.317640826 container remove 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:54:58 np0005531754 systemd[1]: libpod-conmon-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope: Deactivated successfully.
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:54:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:54:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 1a9be263-dd44-4ca5-b82b-540e2d3a15ee does not exist
Nov 22 00:54:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 94c3aba4-a2f5-4d6c-b1cd-7c0b0ca75f30 does not exist
Nov 22 00:54:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 118 KiB/s wr, 12 op/s
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:54:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:54:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/.meta.tmp'
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/.meta.tmp' to config b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/.meta'
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "format": "json"}]: dispatch
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:54:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:54:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:54:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:55:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 60 KiB/s wr, 7 op/s
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:55:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:55:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:55:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:55:01 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:55:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:55:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 9 op/s
Nov 22 00:55:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 00:55:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 22 00:55:03 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 00:55:03 np0005531754 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Nov 22 00:55:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 00:55:03 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:03.030+0000 7f5339360640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Nov 22 00:55:03 np0005531754 ceph-mgr[76134]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Nov 22 00:55:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:03 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 00:55:04 np0005531754 podman[266136]: 2025-11-22 05:55:04.236672037 +0000 UTC m=+0.086895803 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 00:55:04 np0005531754 podman[266137]: 2025-11-22 05:55:04.236839681 +0000 UTC m=+0.078790501 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946/.meta.tmp'
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946/.meta.tmp' to config b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946/.meta'
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "format": "json"}]: dispatch
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 00:55:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:55:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:55:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 00:55:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:05 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:05 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 00:55:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"} v 0) v1
Nov 22 00:55:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 00:55:06 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-758311238 with tenant 7c0b4b3107784ce6890ddd12d362ec8e
Nov 22 00:55:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 94 KiB/s wr, 9 op/s
Nov 22 00:55:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 00:55:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 00:55:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:55:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:55:08 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 94 KiB/s wr, 9 op/s
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f776c050-1471-4343-b299-6c3d96952946", "format": "json"}]: dispatch
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f776c050-1471-4343-b299-6c3d96952946, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f776c050-1471-4343-b299-6c3d96952946, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:09 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:09.231+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f776c050-1471-4343-b299-6c3d96952946' of type subvolume
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f776c050-1471-4343-b299-6c3d96952946' of type subvolume
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "force": true, "format": "json"}]: dispatch
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946'' moved to trashcan
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:55:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7'
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07
Nov 22 00:55:10 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07],prefix=session evict} (starting...)
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 7 op/s
Nov 22 00:55:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:55:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:12 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:12 np0005531754 nova_compute[255660]: 2025-11-22 05:55:12.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:12 np0005531754 nova_compute[255660]: 2025-11-22 05:55:12.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 00:55:12 np0005531754 nova_compute[255660]: 2025-11-22 05:55:12.147 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 00:55:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 92 KiB/s wr, 10 op/s
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"} v 0) v1
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"} v 0) v1
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]: dispatch
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]': finished
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-758311238, client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07
Nov 22 00:55:13 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-758311238,client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07],prefix=session evict} (starting...)
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]: dispatch
Nov 22 00:55:13 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]': finished
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:55:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:55:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 7 op/s
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.147 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.174 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.174 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.175 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.175 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.176 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986122239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.709 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:55:15 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 00:55:15 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.886 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.887 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.887 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.888 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.942 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.943 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:55:15 np0005531754 nova_compute[255660]: 2025-11-22 05:55:15.958 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:55:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:55:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702995535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:55:16 np0005531754 nova_compute[255660]: 2025-11-22 05:55:16.405 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:55:16 np0005531754 nova_compute[255660]: 2025-11-22 05:55:16.413 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:55:16 np0005531754 nova_compute[255660]: 2025-11-22 05:55:16.434 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:55:16 np0005531754 nova_compute[255660]: 2025-11-22 05:55:16.436 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:55:16 np0005531754 nova_compute[255660]: 2025-11-22 05:55:16.437 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:55:16 np0005531754 nova_compute[255660]: 2025-11-22 05:55:16.438 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:16 np0005531754 nova_compute[255660]: 2025-11-22 05:55:16.438 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 00:55:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) v1
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5
Nov 22 00:55:17 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5],prefix=session evict} (starting...)
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:17 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 22 00:55:17 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 22 00:55:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 77 KiB/s wr, 9 op/s
Nov 22 00:55:19 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:19 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.427092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919427297, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2361, "num_deletes": 253, "total_data_size": 2906270, "memory_usage": 2964824, "flush_reason": "Manual Compaction"}
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 22 00:55:19 np0005531754 nova_compute[255660]: 2025-11-22 05:55:19.439 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919450682, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2857694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21186, "largest_seqno": 23546, "table_properties": {"data_size": 2847377, "index_size": 6235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25247, "raw_average_key_size": 21, "raw_value_size": 2825082, "raw_average_value_size": 2382, "num_data_blocks": 276, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790776, "oldest_key_time": 1763790776, "file_creation_time": 1763790919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 23542 microseconds, and 11249 cpu microseconds.
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.450754) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2857694 bytes OK
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.450792) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.454151) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.454190) EVENT_LOG_v1 {"time_micros": 1763790919454180, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.454219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2895562, prev total WAL file size 2895562, number of live WAL files 2.
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.455248) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2790KB)], [50(7258KB)]
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919455288, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10290391, "oldest_snapshot_seqno": -1}
Nov 22 00:55:19 np0005531754 nova_compute[255660]: 2025-11-22 05:55:19.465 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:19 np0005531754 nova_compute[255660]: 2025-11-22 05:55:19.466 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:19 np0005531754 nova_compute[255660]: 2025-11-22 05:55:19.466 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:55:19 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5150 keys, 8541238 bytes, temperature: kUnknown
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919523246, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8541238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8504889, "index_size": 22351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 126893, "raw_average_key_size": 24, "raw_value_size": 8410345, "raw_average_value_size": 1633, "num_data_blocks": 932, "num_entries": 5150, "num_filter_entries": 5150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.523613) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8541238 bytes
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.525054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.2 rd, 125.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5678, records dropped: 528 output_compression: NoCompression
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.525073) EVENT_LOG_v1 {"time_micros": 1763790919525063, "job": 26, "event": "compaction_finished", "compaction_time_micros": 68066, "compaction_time_cpu_micros": 28732, "output_level": 6, "num_output_files": 1, "total_output_size": 8541238, "num_input_records": 5678, "num_output_records": 5150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919525672, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919527071, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.455191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:19 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:20 np0005531754 nova_compute[255660]: 2025-11-22 05:55:20.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 8 op/s
Nov 22 00:55:20 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "admin", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 00:55:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) v1
Nov 22 00:55:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 22 00:55:20 np0005531754 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 22 00:55:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 00:55:20 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:20.824+0000 7f5339360640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 22 00:55:20 np0005531754 ceph-mgr[76134]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 22 00:55:20 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 22 00:55:21 np0005531754 nova_compute[255660]: 2025-11-22 05:55:21.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:21 np0005531754 nova_compute[255660]: 2025-11-22 05:55:21.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:22 np0005531754 nova_compute[255660]: 2025-11-22 05:55:22.142 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 113 KiB/s wr, 12 op/s
Nov 22 00:55:23 np0005531754 nova_compute[255660]: 2025-11-22 05:55:23.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:55:23 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:23 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:55:23 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:55:24 np0005531754 nova_compute[255660]: 2025-11-22 05:55:24.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:24 np0005531754 nova_compute[255660]: 2025-11-22 05:55:24.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:55:24 np0005531754 nova_compute[255660]: 2025-11-22 05:55:24.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:55:24 np0005531754 nova_compute[255660]: 2025-11-22 05:55:24.160 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:55:24 np0005531754 nova_compute[255660]: 2025-11-22 05:55:24.160 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:24 np0005531754 podman[266227]: 2025-11-22 05:55:24.309922647 +0000 UTC m=+0.166361861 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 00:55:24 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 00:55:24 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID david with tenant 525ba1ccf0d546c7b4118a0855306190
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 00:55:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 81 KiB/s wr, 9 op/s
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:24 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:55:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:26 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 113 KiB/s wr, 11 op/s
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/.meta.tmp'
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/.meta.tmp' to config b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/.meta'
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "format": "json"}]: dispatch
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:55:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:55:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 9 op/s
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:55:30 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 00:55:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 00:55:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 69 KiB/s wr, 7 op/s
Nov 22 00:55:31 np0005531754 nova_compute[255660]: 2025-11-22 05:55:31.417 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:55:31 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:55:31.441 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:55:31 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:55:31.442 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:55:31 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 00:55:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 22 00:55:31 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 00:55:31 np0005531754 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Nov 22 00:55:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 00:55:31 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:31.867+0000 7f5339360640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Nov 22 00:55:31 np0005531754 ceph-mgr[76134]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Nov 22 00:55:32 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 00:55:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 101 KiB/s wr, 11 op/s
Nov 22 00:55:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:33 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:55:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:33 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:55:34 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 00:55:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:35 np0005531754 podman[266254]: 2025-11-22 05:55:35.238744606 +0000 UTC m=+0.089023671 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 00:55:35 np0005531754 podman[266255]: 2025-11-22 05:55:35.245659175 +0000 UTC m=+0.087734276 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 00:55:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '7bcf26e0-68e7-4e86-801e-5338f311cec3'
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/ee7c5dcf-f783-40a3-8f91-0b5e06a2a160
Nov 22 00:55:36 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/ee7c5dcf-f783-40a3-8f91-0b5e06a2a160],prefix=session evict} (starting...)
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.445 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:55:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 00:55:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.935 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:55:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.936 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:55:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.936 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:55:37 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:55:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 6 op/s
Nov 22 00:55:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:55:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:55:38 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:55:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:55:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:55:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:55:39 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 00:55:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) v1
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 22 00:55:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 00:55:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6
Nov 22 00:55:40 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6],prefix=session evict} (starting...)
Nov 22 00:55:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 22 00:55:40 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 22 00:55:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 5 op/s
Nov 22 00:55:41 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 00:55:41 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:41 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:41 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:41 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:55:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7325 writes, 29K keys, 7325 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 7325 writes, 1543 syncs, 4.75 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1699 writes, 5316 keys, 1699 commit groups, 1.0 writes per commit group, ingest: 7.07 MB, 0.01 MB/s#012Interval WAL: 1699 writes, 663 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 00:55:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 101 KiB/s wr, 10 op/s
Nov 22 00:55:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "format": "json"}]: dispatch
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:43 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:43.696+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bcf26e0-68e7-4e86-801e-5338f311cec3' of type subvolume
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bcf26e0-68e7-4e86-801e-5338f311cec3' of type subvolume
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "force": true, "format": "json"}]: dispatch
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3'' moved to trashcan
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:55:43
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:55:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 69 KiB/s wr, 7 op/s
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 00:55:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 00:55:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:55:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:55:44 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:55:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:55:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 00:55:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 00:55:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 00:55:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 9 op/s
Nov 22 00:55:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:55:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1499730367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:55:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:55:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1499730367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "format": "json"}]: dispatch
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:47 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:47.358+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7' of type subvolume
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7' of type subvolume
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "force": true, "format": "json"}]: dispatch
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7'' moved to trashcan
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:55:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 00:55:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:55:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 9133 writes, 36K keys, 9133 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 9133 writes, 2084 syncs, 4.38 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2182 writes, 7209 keys, 2182 commit groups, 1.0 writes per commit group, ingest: 7.59 MB, 0.01 MB/s#012Interval WAL: 2182 writes, 839 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:55:48 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:55:48 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:55:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 67 KiB/s wr, 8 op/s
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 67 KiB/s wr, 8 op/s
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "format": "json"}]: dispatch
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2' of type subvolume
Nov 22 00:55:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:50.891+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2' of type subvolume
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "force": true, "format": "json"}]: dispatch
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2'' moved to trashcan
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:55:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 102 KiB/s wr, 11 op/s
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/.meta.tmp'
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/.meta.tmp' to config b'/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/.meta'
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "format": "json"}]: dispatch
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:55:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:55:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:55:52 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00025697005030279353 of space, bias 4.0, pg target 0.30836406036335223 quantized to 16 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:55:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:55:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 00:55:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8915 writes, 34K keys, 8915 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 8915 writes, 2241 syncs, 3.98 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3166 writes, 10K keys, 3166 commit groups, 1.0 writes per commit group, ingest: 14.20 MB, 0.02 MB/s#012Interval WAL: 3166 writes, 1329 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "admin", "format": "json"}]: dispatch
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:54 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:54.650+0000 7f5339360640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d03444f-9989-4f30-9672-a2032459f666", "format": "json"}]: dispatch
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8d03444f-9989-4f30-9672-a2032459f666, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8d03444f-9989-4f30-9672-a2032459f666, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:55:54 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:54.737+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d03444f-9989-4f30-9672-a2032459f666' of type subvolume
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d03444f-9989-4f30-9672-a2032459f666' of type subvolume
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "force": true, "format": "json"}]: dispatch
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666'' moved to trashcan
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 00:55:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 55 KiB/s wr, 6 op/s
Nov 22 00:55:55 np0005531754 podman[266298]: 2025-11-22 05:55:55.28131618 +0000 UTC m=+0.132049616 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 00:55:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 73 KiB/s wr, 7 op/s
Nov 22 00:55:56 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 00:55:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:55:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]} v 0) v1
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]: dispatch
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]': finished
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:55:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]: dispatch
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]': finished
Nov 22 00:55:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:55:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:55:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 54 KiB/s wr, 6 op/s
Nov 22 00:55:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:55:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:55:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:55:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:55:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:55:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 00:56:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:56:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 00:56:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:56:00 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:56:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 00:56:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:56:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]} v 0) v1
Nov 22 00:56:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]: dispatch
Nov 22 00:56:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]': finished
Nov 22 00:56:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:56:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 00:56:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:56:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2
Nov 22 00:56:01 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2],prefix=session evict} (starting...)
Nov 22 00:56:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:56:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 00:56:01 np0005531754 podman[266837]: 2025-11-22 05:56:01.69245882 +0000 UTC m=+0.082895714 container create 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 22 00:56:01 np0005531754 podman[266837]: 2025-11-22 05:56:01.633444518 +0000 UTC m=+0.023881442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:01 np0005531754 systemd[1]: Started libpod-conmon-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope.
Nov 22 00:56:01 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:01 np0005531754 podman[266837]: 2025-11-22 05:56:01.913002269 +0000 UTC m=+0.303439193 container init 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 00:56:01 np0005531754 podman[266837]: 2025-11-22 05:56:01.922452687 +0000 UTC m=+0.312889581 container start 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 00:56:01 np0005531754 flamboyant_knuth[266853]: 167 167
Nov 22 00:56:01 np0005531754 systemd[1]: libpod-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope: Deactivated successfully.
Nov 22 00:56:01 np0005531754 conmon[266853]: conmon 036599f02dd36edab434 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope/container/memory.events
Nov 22 00:56:01 np0005531754 podman[266837]: 2025-11-22 05:56:01.940284104 +0000 UTC m=+0.330721038 container attach 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:56:01 np0005531754 podman[266837]: 2025-11-22 05:56:01.942210316 +0000 UTC m=+0.332647210 container died 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:56:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:56:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]: dispatch
Nov 22 00:56:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:02 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]': finished
Nov 22 00:56:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-7bc461d74da757971882d6a4baadf1673e402be24a34fe90448209a187658775-merged.mount: Deactivated successfully.
Nov 22 00:56:02 np0005531754 podman[266837]: 2025-11-22 05:56:02.244746884 +0000 UTC m=+0.635183788 container remove 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:56:02 np0005531754 systemd[1]: libpod-conmon-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope: Deactivated successfully.
Nov 22 00:56:02 np0005531754 podman[266878]: 2025-11-22 05:56:02.478280648 +0000 UTC m=+0.068474180 container create 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 00:56:02 np0005531754 systemd[1]: Started libpod-conmon-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope.
Nov 22 00:56:02 np0005531754 podman[266878]: 2025-11-22 05:56:02.438222275 +0000 UTC m=+0.028415767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:02 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:02 np0005531754 podman[266878]: 2025-11-22 05:56:02.598910541 +0000 UTC m=+0.189104033 container init 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:56:02 np0005531754 podman[266878]: 2025-11-22 05:56:02.610136497 +0000 UTC m=+0.200329999 container start 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:56:02 np0005531754 podman[266878]: 2025-11-22 05:56:02.616074719 +0000 UTC m=+0.206268231 container attach 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 00:56:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 7 op/s
Nov 22 00:56:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:04 np0005531754 frosty_austin[266896]: [
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:    {
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "available": false,
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "ceph_device": false,
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "lsm_data": {},
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "lvs": [],
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "path": "/dev/sr0",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "rejected_reasons": [
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "Has a FileSystem",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "Insufficient space (<5GB)"
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        ],
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        "sys_api": {
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "actuators": null,
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "device_nodes": "sr0",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "devname": "sr0",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "human_readable_size": "482.00 KB",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "id_bus": "ata",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "model": "QEMU DVD-ROM",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "nr_requests": "2",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "parent": "/dev/sr0",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "partitions": {},
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "path": "/dev/sr0",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "removable": "1",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "rev": "2.5+",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "ro": "0",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "rotational": "1",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "sas_address": "",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "sas_device_handle": "",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "scheduler_mode": "mq-deadline",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "sectors": 0,
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "sectorsize": "2048",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "size": 493568.0,
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "support_discard": "2048",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "type": "disk",
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:            "vendor": "QEMU"
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:        }
Nov 22 00:56:04 np0005531754 frosty_austin[266896]:    }
Nov 22 00:56:04 np0005531754 frosty_austin[266896]: ]
Nov 22 00:56:04 np0005531754 systemd[1]: libpod-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope: Deactivated successfully.
Nov 22 00:56:04 np0005531754 systemd[1]: libpod-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope: Consumed 1.577s CPU time.
Nov 22 00:56:04 np0005531754 podman[268726]: 2025-11-22 05:56:04.207392674 +0000 UTC m=+0.042249664 container died 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:56:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b-merged.mount: Deactivated successfully.
Nov 22 00:56:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 34 KiB/s wr, 4 op/s
Nov 22 00:56:05 np0005531754 podman[268726]: 2025-11-22 05:56:05.029871943 +0000 UTC m=+0.864728983 container remove 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:56:05 np0005531754 systemd[1]: libpod-conmon-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope: Deactivated successfully.
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 67dee7db-6e16-4aa2-a8a2-b49037d7057f does not exist
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 3408f30e-63c9-450e-9d1a-ef88768e8522 does not exist
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c44dff1a-dff0-4db4-92f9-f1c318c4c189 does not exist
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:56:05 np0005531754 podman[268766]: 2025-11-22 05:56:05.447727298 +0000 UTC m=+0.078063682 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:56:05 np0005531754 podman[268765]: 2025-11-22 05:56:05.463503489 +0000 UTC m=+0.095061146 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) v1
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 00:56:05 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:56:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:56:05 np0005531754 podman[268923]: 2025-11-22 05:56:05.983218967 +0000 UTC m=+0.044449468 container create 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 00:56:06 np0005531754 systemd[1]: Started libpod-conmon-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope.
Nov 22 00:56:06 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:06 np0005531754 podman[268923]: 2025-11-22 05:56:05.964459127 +0000 UTC m=+0.025689628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:06 np0005531754 podman[268923]: 2025-11-22 05:56:06.075725174 +0000 UTC m=+0.136955735 container init 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 00:56:06 np0005531754 podman[268923]: 2025-11-22 05:56:06.084373535 +0000 UTC m=+0.145604026 container start 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:56:06 np0005531754 podman[268923]: 2025-11-22 05:56:06.088456444 +0000 UTC m=+0.149687015 container attach 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:56:06 np0005531754 laughing_leakey[268939]: 167 167
Nov 22 00:56:06 np0005531754 systemd[1]: libpod-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope: Deactivated successfully.
Nov 22 00:56:06 np0005531754 conmon[268939]: conmon 7783a27151ee2a86a626 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope/container/memory.events
Nov 22 00:56:06 np0005531754 podman[268923]: 2025-11-22 05:56:06.09280864 +0000 UTC m=+0.154039141 container died 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 00:56:06 np0005531754 systemd[1]: var-lib-containers-storage-overlay-6c0df53d637e25280966a1034e26b40383181424938d6f228a1747260f7cfd27-merged.mount: Deactivated successfully.
Nov 22 00:56:06 np0005531754 podman[268923]: 2025-11-22 05:56:06.198627083 +0000 UTC m=+0.259857584 container remove 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 22 00:56:06 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 22 00:56:06 np0005531754 systemd[1]: libpod-conmon-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope: Deactivated successfully.
Nov 22 00:56:06 np0005531754 podman[268963]: 2025-11-22 05:56:06.437633921 +0000 UTC m=+0.059781757 container create 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 00:56:06 np0005531754 systemd[1]: Started libpod-conmon-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope.
Nov 22 00:56:06 np0005531754 podman[268963]: 2025-11-22 05:56:06.408550074 +0000 UTC m=+0.030697990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:06 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:06 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:06 np0005531754 podman[268963]: 2025-11-22 05:56:06.548133239 +0000 UTC m=+0.170281115 container init 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:56:06 np0005531754 podman[268963]: 2025-11-22 05:56:06.56506702 +0000 UTC m=+0.187214866 container start 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:56:06 np0005531754 podman[268963]: 2025-11-22 05:56:06.569645462 +0000 UTC m=+0.191793318 container attach 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:56:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 6 op/s
Nov 22 00:56:07 np0005531754 objective_brahmagupta[268979]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:56:07 np0005531754 objective_brahmagupta[268979]: --> relative data size: 1.0
Nov 22 00:56:07 np0005531754 objective_brahmagupta[268979]: --> All data devices are unavailable
Nov 22 00:56:07 np0005531754 systemd[1]: libpod-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope: Deactivated successfully.
Nov 22 00:56:07 np0005531754 podman[268963]: 2025-11-22 05:56:07.787757661 +0000 UTC m=+1.409905507 container died 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:56:07 np0005531754 systemd[1]: libpod-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope: Consumed 1.167s CPU time.
Nov 22 00:56:07 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7-merged.mount: Deactivated successfully.
Nov 22 00:56:07 np0005531754 podman[268963]: 2025-11-22 05:56:07.859980027 +0000 UTC m=+1.482127843 container remove 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:56:07 np0005531754 systemd[1]: libpod-conmon-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope: Deactivated successfully.
Nov 22 00:56:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:08 np0005531754 podman[269160]: 2025-11-22 05:56:08.518704662 +0000 UTC m=+0.047268292 container create 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:56:08 np0005531754 systemd[1]: Started libpod-conmon-21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4.scope.
Nov 22 00:56:08 np0005531754 podman[269160]: 2025-11-22 05:56:08.493828618 +0000 UTC m=+0.022392278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:08 np0005531754 podman[269160]: 2025-11-22 05:56:08.649262185 +0000 UTC m=+0.177825895 container init 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:56:08 np0005531754 podman[269160]: 2025-11-22 05:56:08.658300927 +0000 UTC m=+0.186864557 container start 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:56:08 np0005531754 zealous_mclean[269176]: 167 167
Nov 22 00:56:08 np0005531754 systemd[1]: libpod-21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4.scope: Deactivated successfully.
Nov 22 00:56:08 np0005531754 podman[269160]: 2025-11-22 05:56:08.679976025 +0000 UTC m=+0.208539745 container attach 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:56:08 np0005531754 podman[269160]: 2025-11-22 05:56:08.680815638 +0000 UTC m=+0.209379328 container died 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:56:08 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f5f3ee24f5645f64960b75b56e229300e3a62361c4437be8b1fa03ce0db11f7a-merged.mount: Deactivated successfully.
Nov 22 00:56:08 np0005531754 podman[269160]: 2025-11-22 05:56:08.772535184 +0000 UTC m=+0.301098844 container remove 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:56:08 np0005531754 systemd[1]: libpod-conmon-21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4.scope: Deactivated successfully.
Nov 22 00:56:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 58 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 5 op/s
Nov 22 00:56:08 np0005531754 podman[269202]: 2025-11-22 05:56:08.990366536 +0000 UTC m=+0.055129272 container create c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:56:09 np0005531754 systemd[1]: Started libpod-conmon-c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a.scope.
Nov 22 00:56:09 np0005531754 podman[269202]: 2025-11-22 05:56:08.963852049 +0000 UTC m=+0.028614835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:09 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:09 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:09 np0005531754 podman[269202]: 2025-11-22 05:56:09.113973024 +0000 UTC m=+0.178735780 container init c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:56:09 np0005531754 podman[269202]: 2025-11-22 05:56:09.122234634 +0000 UTC m=+0.186997400 container start c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 00:56:09 np0005531754 podman[269202]: 2025-11-22 05:56:09.132288782 +0000 UTC m=+0.197051538 container attach c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:56:09 np0005531754 trusting_carver[269219]: {
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:    "0": [
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:        {
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "devices": [
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "/dev/loop3"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            ],
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_name": "ceph_lv0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_size": "21470642176",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "name": "ceph_lv0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "tags": {
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cluster_name": "ceph",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.crush_device_class": "",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.encrypted": "0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osd_id": "0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.type": "block",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.vdo": "0"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            },
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "type": "block",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "vg_name": "ceph_vg0"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:        }
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:    ],
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:    "1": [
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:        {
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "devices": [
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "/dev/loop4"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            ],
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_name": "ceph_lv1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_size": "21470642176",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "name": "ceph_lv1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "tags": {
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cluster_name": "ceph",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.crush_device_class": "",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.encrypted": "0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osd_id": "1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.type": "block",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.vdo": "0"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            },
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "type": "block",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "vg_name": "ceph_vg1"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:        }
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:    ],
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:    "2": [
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:        {
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "devices": [
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "/dev/loop5"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            ],
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_name": "ceph_lv2",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_size": "21470642176",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "name": "ceph_lv2",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "tags": {
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.cluster_name": "ceph",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.crush_device_class": "",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.encrypted": "0",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osd_id": "2",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.type": "block",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:                "ceph.vdo": "0"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            },
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "type": "block",
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:            "vg_name": "ceph_vg2"
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:        }
Nov 22 00:56:09 np0005531754 trusting_carver[269219]:    ]
Nov 22 00:56:09 np0005531754 trusting_carver[269219]: }
Nov 22 00:56:09 np0005531754 systemd[1]: libpod-c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a.scope: Deactivated successfully.
Nov 22 00:56:09 np0005531754 podman[269202]: 2025-11-22 05:56:09.879098947 +0000 UTC m=+0.943861743 container died c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:56:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76-merged.mount: Deactivated successfully.
Nov 22 00:56:10 np0005531754 podman[269202]: 2025-11-22 05:56:10.679548822 +0000 UTC m=+1.744311588 container remove c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 22 00:56:10 np0005531754 systemd[1]: libpod-conmon-c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a.scope: Deactivated successfully.
Nov 22 00:56:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 5 op/s
Nov 22 00:56:11 np0005531754 podman[269383]: 2025-11-22 05:56:11.521189496 +0000 UTC m=+0.102558257 container create 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:56:11 np0005531754 podman[269383]: 2025-11-22 05:56:11.458284028 +0000 UTC m=+0.039652849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:11 np0005531754 systemd[1]: Started libpod-conmon-135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0.scope.
Nov 22 00:56:11 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:11 np0005531754 podman[269383]: 2025-11-22 05:56:11.66111402 +0000 UTC m=+0.242482831 container init 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:56:11 np0005531754 podman[269383]: 2025-11-22 05:56:11.669579066 +0000 UTC m=+0.250947827 container start 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 00:56:11 np0005531754 bold_jennings[269399]: 167 167
Nov 22 00:56:11 np0005531754 systemd[1]: libpod-135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0.scope: Deactivated successfully.
Nov 22 00:56:11 np0005531754 podman[269383]: 2025-11-22 05:56:11.703582993 +0000 UTC m=+0.284951724 container attach 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 00:56:11 np0005531754 podman[269383]: 2025-11-22 05:56:11.703896862 +0000 UTC m=+0.285265593 container died 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:56:11 np0005531754 systemd[1]: var-lib-containers-storage-overlay-797dea0ba4dd26d39a4b861e03d9b20e7ac13087b39f8cc6ac29da5e43be5e2b-merged.mount: Deactivated successfully.
Nov 22 00:56:11 np0005531754 podman[269383]: 2025-11-22 05:56:11.868117473 +0000 UTC m=+0.449486224 container remove 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:56:11 np0005531754 systemd[1]: libpod-conmon-135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0.scope: Deactivated successfully.
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "format": "json"}]: dispatch
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5fe2732a-575f-4985-a0be-d017e158a52a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5fe2732a-575f-4985-a0be-d017e158a52a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:12 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:12.153+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5fe2732a-575f-4985-a0be-d017e158a52a' of type subvolume
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5fe2732a-575f-4985-a0be-d017e158a52a' of type subvolume
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "force": true, "format": "json"}]: dispatch
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a'' moved to trashcan
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 00:56:12 np0005531754 podman[269425]: 2025-11-22 05:56:12.082854251 +0000 UTC m=+0.031409508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:56:12 np0005531754 podman[269425]: 2025-11-22 05:56:12.262657428 +0000 UTC m=+0.211212595 container create 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 00:56:12 np0005531754 systemd[1]: Started libpod-conmon-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope.
Nov 22 00:56:12 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:56:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:12 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:56:12 np0005531754 podman[269425]: 2025-11-22 05:56:12.445141297 +0000 UTC m=+0.393696534 container init 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:56:12 np0005531754 podman[269425]: 2025-11-22 05:56:12.460324862 +0000 UTC m=+0.408880049 container start 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:56:12 np0005531754 podman[269425]: 2025-11-22 05:56:12.592232831 +0000 UTC m=+0.540788028 container attach 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 00:56:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 42 KiB/s wr, 4 op/s
Nov 22 00:56:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:13 np0005531754 silly_curran[269442]: {
Nov 22 00:56:13 np0005531754 silly_curran[269442]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "osd_id": 1,
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "type": "bluestore"
Nov 22 00:56:13 np0005531754 silly_curran[269442]:    },
Nov 22 00:56:13 np0005531754 silly_curran[269442]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "osd_id": 2,
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "type": "bluestore"
Nov 22 00:56:13 np0005531754 silly_curran[269442]:    },
Nov 22 00:56:13 np0005531754 silly_curran[269442]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "osd_id": 0,
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:56:13 np0005531754 silly_curran[269442]:        "type": "bluestore"
Nov 22 00:56:13 np0005531754 silly_curran[269442]:    }
Nov 22 00:56:13 np0005531754 silly_curran[269442]: }
Nov 22 00:56:13 np0005531754 systemd[1]: libpod-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope: Deactivated successfully.
Nov 22 00:56:13 np0005531754 systemd[1]: libpod-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope: Consumed 1.102s CPU time.
Nov 22 00:56:13 np0005531754 podman[269475]: 2025-11-22 05:56:13.593944877 +0000 UTC m=+0.023754735 container died 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 00:56:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1-merged.mount: Deactivated successfully.
Nov 22 00:56:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:56:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:56:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:56:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:56:13 np0005531754 podman[269475]: 2025-11-22 05:56:13.827529269 +0000 UTC m=+0.257339057 container remove 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:56:13 np0005531754 systemd[1]: libpod-conmon-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope: Deactivated successfully.
Nov 22 00:56:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:56:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:56:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:56:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:56:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev bc40a5d8-3215-4baf-816a-e082b58f5f8d does not exist
Nov 22 00:56:14 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 94c08024-d18c-45a0-ad5e-5d1bcf3c4689 does not exist
Nov 22 00:56:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 36 KiB/s wr, 3 op/s
Nov 22 00:56:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:14 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:56:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:56:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/401438297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.619 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.784 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.785 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.785 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:56:15 np0005531754 nova_compute[255660]: 2025-11-22 05:56:15.785 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.397 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.397 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.633 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.699 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.700 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.717 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.734 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 00:56:16 np0005531754 nova_compute[255660]: 2025-11-22 05:56:16.751 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:56:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 45 KiB/s wr, 4 op/s
Nov 22 00:56:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:56:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968414510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:56:17 np0005531754 nova_compute[255660]: 2025-11-22 05:56:17.221 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:56:17 np0005531754 nova_compute[255660]: 2025-11-22 05:56:17.227 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:56:17 np0005531754 nova_compute[255660]: 2025-11-22 05:56:17.248 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:56:17 np0005531754 nova_compute[255660]: 2025-11-22 05:56:17.250 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:56:17 np0005531754 nova_compute[255660]: 2025-11-22 05:56:17.250 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.317312) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978317350, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1045, "num_deletes": 257, "total_data_size": 1168468, "memory_usage": 1191408, "flush_reason": "Manual Compaction"}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978326323, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1122572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23547, "largest_seqno": 24591, "table_properties": {"data_size": 1117614, "index_size": 2354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11745, "raw_average_key_size": 19, "raw_value_size": 1107103, "raw_average_value_size": 1835, "num_data_blocks": 106, "num_entries": 603, "num_filter_entries": 603, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790919, "oldest_key_time": 1763790919, "file_creation_time": 1763790978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 9056 microseconds, and 5552 cpu microseconds.
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.326368) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1122572 bytes OK
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.326389) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.328545) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.328571) EVENT_LOG_v1 {"time_micros": 1763790978328563, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.328593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1163248, prev total WAL file size 1163248, number of live WAL files 2.
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.329391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1096KB)], [53(8341KB)]
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978329443, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9663810, "oldest_snapshot_seqno": -1}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5224 keys, 9567487 bytes, temperature: kUnknown
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978434117, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9567487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9528949, "index_size": 24348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 129901, "raw_average_key_size": 24, "raw_value_size": 9431517, "raw_average_value_size": 1805, "num_data_blocks": 1017, "num_entries": 5224, "num_filter_entries": 5224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.434539) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9567487 bytes
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.436643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.2 rd, 91.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(17.1) write-amplify(8.5) OK, records in: 5753, records dropped: 529 output_compression: NoCompression
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.436671) EVENT_LOG_v1 {"time_micros": 1763790978436658, "job": 28, "event": "compaction_finished", "compaction_time_micros": 104813, "compaction_time_cpu_micros": 22400, "output_level": 6, "num_output_files": 1, "total_output_size": 9567487, "num_input_records": 5753, "num_output_records": 5224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978437230, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978440235, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.329297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:56:18 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:56:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 27 KiB/s wr, 3 op/s
Nov 22 00:56:20 np0005531754 nova_compute[255660]: 2025-11-22 05:56:20.251 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:20 np0005531754 nova_compute[255660]: 2025-11-22 05:56:20.251 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:20 np0005531754 nova_compute[255660]: 2025-11-22 05:56:20.252 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:56:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 3 op/s
Nov 22 00:56:22 np0005531754 nova_compute[255660]: 2025-11-22 05:56:22.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 14 KiB/s wr, 2 op/s
Nov 22 00:56:23 np0005531754 nova_compute[255660]: 2025-11-22 05:56:23.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:23 np0005531754 nova_compute[255660]: 2025-11-22 05:56:23.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:24 np0005531754 nova_compute[255660]: 2025-11-22 05:56:24.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 00:56:25 np0005531754 nova_compute[255660]: 2025-11-22 05:56:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:25 np0005531754 nova_compute[255660]: 2025-11-22 05:56:25.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:56:25 np0005531754 nova_compute[255660]: 2025-11-22 05:56:25.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:56:25 np0005531754 nova_compute[255660]: 2025-11-22 05:56:25.545 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:56:25 np0005531754 nova_compute[255660]: 2025-11-22 05:56:25.546 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:56:26 np0005531754 podman[269585]: 2025-11-22 05:56:26.298297902 +0000 UTC m=+0.147247390 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 00:56:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/.meta.tmp'
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/.meta.tmp' to config b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/.meta'
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "format": "json"}]: dispatch
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:56:27 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:56:27 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:27 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.4 KiB/s wr, 1 op/s
Nov 22 00:56:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/.meta.tmp'
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/.meta.tmp' to config b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/.meta'
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "format": "json"}]: dispatch
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Nov 22 00:56:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 00:56:34 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:56:34.400 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:56:34 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:56:34.402 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:56:34 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:56:34.403 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620/.meta.tmp'
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620/.meta.tmp' to config b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620/.meta'
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "format": "json"}]: dispatch
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 00:56:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Nov 22 00:56:35 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:56:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:35 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:56:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:56:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:56:36 np0005531754 podman[269610]: 2025-11-22 05:56:36.202224246 +0000 UTC m=+0.060258549 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 00:56:36 np0005531754 podman[269611]: 2025-11-22 05:56:36.24435412 +0000 UTC m=+0.089161360 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:56:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Nov 22 00:56:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:56:36.936 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:56:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:56:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:56:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:56:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:56:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 2 op/s
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:39.030+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e91eaa5-0ca4-4703-941f-d4b008c28620' of type subvolume
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e91eaa5-0ca4-4703-941f-d4b008c28620' of type subvolume
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "force": true, "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620'' moved to trashcan
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67
Nov 22 00:56:39 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67],prefix=session evict} (starting...)
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:39.572+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e38d87a-ae0f-4d08-9b46-1181605e24ce' of type subvolume
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e38d87a-ae0f-4d08-9b46-1181605e24ce' of type subvolume
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "force": true, "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce'' moved to trashcan
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:56:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:56:39 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:56:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s wr, 4 op/s
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4/.meta.tmp'
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4/.meta.tmp' to config b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4/.meta'
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "format": "json"}]: dispatch
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 5 op/s
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/.meta.tmp'
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/.meta.tmp' to config b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/.meta'
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "format": "json"}]: dispatch
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:56:43
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms']
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:56:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:56:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 5 op/s
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b/.meta.tmp'
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b/.meta.tmp' to config b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b/.meta'
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "format": "json"}]: dispatch
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 00:56:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 00:56:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:46 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:46.976+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5465065f-2d60-4371-98a8-d41c3f15e3e4' of type subvolume
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5465065f-2d60-4371-98a8-d41c3f15e3e4' of type subvolume
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "force": true, "format": "json"}]: dispatch
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4'' moved to trashcan
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:56:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 00:56:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:56:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/409705138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:56:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:56:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/409705138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:56:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 6 op/s
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a/.meta.tmp'
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a/.meta.tmp' to config b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a/.meta'
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "format": "json"}]: dispatch
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 00:56:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 00:56:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:56:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 00:56:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:56:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c
Nov 22 00:56:50 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c],prefix=session evict} (starting...)
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "format": "json"}]: dispatch
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b91da8df-240a-407a-a34e-98bfc943cf90, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b91da8df-240a-407a-a34e-98bfc943cf90, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:50.397+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b91da8df-240a-407a-a34e-98bfc943cf90' of type subvolume
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b91da8df-240a-407a-a34e-98bfc943cf90' of type subvolume
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "force": true, "format": "json"}]: dispatch
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90'' moved to trashcan
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 00:56:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 KiB/s wr, 8 op/s
Nov 22 00:56:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:56:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "format": "json"}]: dispatch
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:52 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:52.639+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1c1ce2-dc9b-48d6-be0f-cc790f23422a' of type subvolume
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1c1ce2-dc9b-48d6-be0f-cc790f23422a' of type subvolume
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "force": true, "format": "json"}]: dispatch
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a'' moved to trashcan
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 00:56:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 47 KiB/s wr, 6 op/s
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00029092748827896074 of space, bias 4.0, pg target 0.3491129859347529 quantized to 16 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:56:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/.meta.tmp'
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/.meta.tmp' to config b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/.meta'
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "format": "json"}]: dispatch
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:56:53 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:56:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:56:53 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:56:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 46 KiB/s wr, 5 op/s
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "format": "json"}]: dispatch
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:56:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:56.105+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a73a7b16-fd1c-4116-9ec6-189608a7680b' of type subvolume
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a73a7b16-fd1c-4116-9ec6-189608a7680b' of type subvolume
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "force": true, "format": "json"}]: dispatch
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b'' moved to trashcan
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 00:56:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 8 op/s
Nov 22 00:56:57 np0005531754 podman[269651]: 2025-11-22 05:56:57.258961478 +0000 UTC m=+0.089592562 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 00:56:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:56:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:57 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:56:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:56:57 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:56:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:56:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 60 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Nov 22 00:56:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e6d675c7-bbaf-4177-8fb3-cadda9f6eea6", "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:56:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e6d675c7-bbaf-4177-8fb3-cadda9f6eea6", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "803dbd72-541a-4f96-91c3-545ca7945362", "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 91 KiB/s wr, 8 op/s
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc
Nov 22 00:57:01 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc],prefix=session evict} (starting...)
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "803dbd72-541a-4f96-91c3-545ca7945362", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5/.meta.tmp'
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5/.meta.tmp' to config b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5/.meta'
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '194fe9d7-4252-42b8-9e5b-0b7a3e0b3774' of type subvolume
Nov 22 00:57:01 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:01.307+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '194fe9d7-4252-42b8-9e5b-0b7a3e0b3774' of type subvolume
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774'' moved to trashcan
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:01 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:01 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 8 op/s
Nov 22 00:57:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea/.meta.tmp'
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea/.meta.tmp' to config b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea/.meta'
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 57 KiB/s wr, 6 op/s
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:04 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73/.meta.tmp'
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73/.meta.tmp' to config b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73/.meta'
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "format": "json"}]: dispatch
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 00:57:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 00:57:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 103 KiB/s wr, 10 op/s
Nov 22 00:57:07 np0005531754 podman[269679]: 2025-11-22 05:57:07.221759306 +0000 UTC m=+0.077529148 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 00:57:07 np0005531754 podman[269680]: 2025-11-22 05:57:07.253099466 +0000 UTC m=+0.098807869 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 00:57:08 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "format": "json"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1a737af8-04c5-43cf-b788-696f3029c8ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1a737af8-04c5-43cf-b788-696f3029c8ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:08.423+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a737af8-04c5-43cf-b788-696f3029c8ea' of type subvolume
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a737af8-04c5-43cf-b788-696f3029c8ea' of type subvolume
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea'' moved to trashcan
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 00:57:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 73 KiB/s wr, 8 op/s
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851/.meta.tmp'
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851/.meta.tmp' to config b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851/.meta'
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "format": "json"}]: dispatch
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 00:57:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 00:57:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 109 KiB/s wr, 9 op/s
Nov 22 00:57:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:57:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 00:57:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:57:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 00:57:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4/.meta.tmp'
Nov 22 00:57:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4/.meta.tmp' to config b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4/.meta'
Nov 22 00:57:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 00:57:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "format": "json"}]: dispatch
Nov 22 00:57:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 00:57:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 00:57:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:12 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 9 op/s
Nov 22 00:57:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd/.meta.tmp'
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd/.meta.tmp' to config b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd/.meta'
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "format": "json"}]: dispatch
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 00:57:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:57:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:57:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 84 KiB/s wr, 8 op/s
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.155 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.156 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.156 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.156 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.157 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 4c4c6774-164a-465a-9d86-6647f907b45e does not exist
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev dea47322-d282-4fd0-9004-95a64b6fd500 does not exist
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev ad0208a5-a914-4bc8-a269-21df8dfdf088 does not exist
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 00:57:15 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "format": "json"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:31743d3b-c309-45ea-a481-74ddccf572f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:31743d3b-c309-45ea-a481-74ddccf572f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:15.523+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '31743d3b-c309-45ea-a481-74ddccf572f4' of type subvolume
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '31743d3b-c309-45ea-a481-74ddccf572f4' of type subvolume
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4'' moved to trashcan
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:15 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:57:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595235561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.620 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.777 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.778 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.778 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.778 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.854 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.854 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:57:15 np0005531754 nova_compute[255660]: 2025-11-22 05:57:15.879 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:57:15 np0005531754 podman[270016]: 2025-11-22 05:57:15.961818778 +0000 UTC m=+0.052932199 container create e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:57:15 np0005531754 systemd[1]: Started libpod-conmon-e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a.scope.
Nov 22 00:57:16 np0005531754 podman[270016]: 2025-11-22 05:57:15.937011853 +0000 UTC m=+0.028125314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:57:16 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:57:16 np0005531754 podman[270016]: 2025-11-22 05:57:16.055136588 +0000 UTC m=+0.146250029 container init e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:16 np0005531754 podman[270016]: 2025-11-22 05:57:16.062319971 +0000 UTC m=+0.153433382 container start e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:57:16 np0005531754 podman[270016]: 2025-11-22 05:57:16.066461982 +0000 UTC m=+0.157575423 container attach e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:57:16 np0005531754 quizzical_goldberg[270050]: 167 167
Nov 22 00:57:16 np0005531754 systemd[1]: libpod-e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a.scope: Deactivated successfully.
Nov 22 00:57:16 np0005531754 podman[270016]: 2025-11-22 05:57:16.070621713 +0000 UTC m=+0.161735134 container died e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:57:16 np0005531754 systemd[1]: var-lib-containers-storage-overlay-14f91019d298f1fe8079ddd01b23674bb94c2fffd933e6eb0d28549cb86f673a-merged.mount: Deactivated successfully.
Nov 22 00:57:16 np0005531754 podman[270016]: 2025-11-22 05:57:16.117039447 +0000 UTC m=+0.208152858 container remove e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:57:16 np0005531754 systemd[1]: libpod-conmon-e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a.scope: Deactivated successfully.
Nov 22 00:57:16 np0005531754 podman[270075]: 2025-11-22 05:57:16.302947259 +0000 UTC m=+0.050822703 container create 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:57:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870850556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:57:16 np0005531754 systemd[1]: Started libpod-conmon-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope.
Nov 22 00:57:16 np0005531754 nova_compute[255660]: 2025-11-22 05:57:16.366 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:57:16 np0005531754 podman[270075]: 2025-11-22 05:57:16.279057979 +0000 UTC m=+0.026933453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:57:16 np0005531754 nova_compute[255660]: 2025-11-22 05:57:16.374 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:57:16 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:57:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:16 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:16 np0005531754 nova_compute[255660]: 2025-11-22 05:57:16.390 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:57:16 np0005531754 podman[270075]: 2025-11-22 05:57:16.391859482 +0000 UTC m=+0.139734936 container init 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 00:57:16 np0005531754 nova_compute[255660]: 2025-11-22 05:57:16.393 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:57:16 np0005531754 nova_compute[255660]: 2025-11-22 05:57:16.393 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:57:16 np0005531754 podman[270075]: 2025-11-22 05:57:16.402835016 +0000 UTC m=+0.150710450 container start 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:57:16 np0005531754 podman[270075]: 2025-11-22 05:57:16.408459077 +0000 UTC m=+0.156334541 container attach 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:57:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 116 KiB/s wr, 29 op/s
Nov 22 00:57:17 np0005531754 friendly_hamilton[270094]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:57:17 np0005531754 friendly_hamilton[270094]: --> relative data size: 1.0
Nov 22 00:57:17 np0005531754 friendly_hamilton[270094]: --> All data devices are unavailable
Nov 22 00:57:17 np0005531754 systemd[1]: libpod-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope: Deactivated successfully.
Nov 22 00:57:17 np0005531754 podman[270075]: 2025-11-22 05:57:17.519168079 +0000 UTC m=+1.267043553 container died 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 00:57:17 np0005531754 systemd[1]: libpod-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope: Consumed 1.060s CPU time.
Nov 22 00:57:17 np0005531754 systemd[1]: var-lib-containers-storage-overlay-542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91-merged.mount: Deactivated successfully.
Nov 22 00:57:17 np0005531754 podman[270075]: 2025-11-22 05:57:17.599929193 +0000 UTC m=+1.347804657 container remove 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 00:57:17 np0005531754 systemd[1]: libpod-conmon-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope: Deactivated successfully.
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "format": "json"}]: dispatch
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:18.076+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd' of type subvolume
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd' of type subvolume
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd'' moved to trashcan
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:18 np0005531754 podman[270275]: 2025-11-22 05:57:18.46077268 +0000 UTC m=+0.047146254 container create 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:57:18 np0005531754 systemd[1]: Started libpod-conmon-86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005.scope.
Nov 22 00:57:18 np0005531754 podman[270275]: 2025-11-22 05:57:18.442030818 +0000 UTC m=+0.028404472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:57:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:57:18 np0005531754 podman[270275]: 2025-11-22 05:57:18.566552974 +0000 UTC m=+0.152926628 container init 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 00:57:18 np0005531754 podman[270275]: 2025-11-22 05:57:18.57946534 +0000 UTC m=+0.165838934 container start 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:57:18 np0005531754 podman[270275]: 2025-11-22 05:57:18.584019242 +0000 UTC m=+0.170392916 container attach 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 00:57:18 np0005531754 boring_faraday[270291]: 167 167
Nov 22 00:57:18 np0005531754 systemd[1]: libpod-86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005.scope: Deactivated successfully.
Nov 22 00:57:18 np0005531754 podman[270275]: 2025-11-22 05:57:18.586953011 +0000 UTC m=+0.173326595 container died 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 00:57:18 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c5eb75ba97bb0029f4bb284ee2b49bba14268927aef415dae37f76a5f82aa6bd-merged.mount: Deactivated successfully.
Nov 22 00:57:18 np0005531754 podman[270275]: 2025-11-22 05:57:18.627393264 +0000 UTC m=+0.213766828 container remove 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:18 np0005531754 systemd[1]: libpod-conmon-86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005.scope: Deactivated successfully.
Nov 22 00:57:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:18 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:18 np0005531754 podman[270315]: 2025-11-22 05:57:18.828253787 +0000 UTC m=+0.055302524 container create 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:57:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 73 KiB/s wr, 45 op/s
Nov 22 00:57:18 np0005531754 systemd[1]: Started libpod-conmon-85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a.scope.
Nov 22 00:57:18 np0005531754 podman[270315]: 2025-11-22 05:57:18.799165807 +0000 UTC m=+0.026214604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:57:18 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:57:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:18 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:18 np0005531754 podman[270315]: 2025-11-22 05:57:18.932352526 +0000 UTC m=+0.159401313 container init 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:57:18 np0005531754 podman[270315]: 2025-11-22 05:57:18.947902952 +0000 UTC m=+0.174951679 container start 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:57:18 np0005531754 podman[270315]: 2025-11-22 05:57:18.95264398 +0000 UTC m=+0.179692707 container attach 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]: {
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:    "0": [
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:        {
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "devices": [
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "/dev/loop3"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            ],
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_name": "ceph_lv0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_size": "21470642176",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "name": "ceph_lv0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "tags": {
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cluster_name": "ceph",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.crush_device_class": "",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.encrypted": "0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osd_id": "0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.type": "block",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.vdo": "0"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            },
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "type": "block",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "vg_name": "ceph_vg0"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:        }
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:    ],
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:    "1": [
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:        {
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "devices": [
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "/dev/loop4"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            ],
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_name": "ceph_lv1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_size": "21470642176",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "name": "ceph_lv1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "tags": {
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cluster_name": "ceph",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.crush_device_class": "",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.encrypted": "0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osd_id": "1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.type": "block",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.vdo": "0"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            },
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "type": "block",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "vg_name": "ceph_vg1"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:        }
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:    ],
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:    "2": [
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:        {
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "devices": [
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "/dev/loop5"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            ],
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_name": "ceph_lv2",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_size": "21470642176",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "name": "ceph_lv2",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "tags": {
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.cluster_name": "ceph",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.crush_device_class": "",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.encrypted": "0",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osd_id": "2",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.type": "block",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:                "ceph.vdo": "0"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            },
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "type": "block",
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:            "vg_name": "ceph_vg2"
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:        }
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]:    ]
Nov 22 00:57:19 np0005531754 hardcore_nash[270332]: }
Nov 22 00:57:19 np0005531754 systemd[1]: libpod-85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a.scope: Deactivated successfully.
Nov 22 00:57:19 np0005531754 podman[270315]: 2025-11-22 05:57:19.686651109 +0000 UTC m=+0.913699816 container died 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:57:19 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c-merged.mount: Deactivated successfully.
Nov 22 00:57:19 np0005531754 podman[270315]: 2025-11-22 05:57:19.754090716 +0000 UTC m=+0.981139413 container remove 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:57:19 np0005531754 systemd[1]: libpod-conmon-85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a.scope: Deactivated successfully.
Nov 22 00:57:20 np0005531754 podman[270496]: 2025-11-22 05:57:20.39988976 +0000 UTC m=+0.055882238 container create e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 00:57:20 np0005531754 systemd[1]: Started libpod-conmon-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope.
Nov 22 00:57:20 np0005531754 podman[270496]: 2025-11-22 05:57:20.372203959 +0000 UTC m=+0.028196487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:57:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:57:20 np0005531754 podman[270496]: 2025-11-22 05:57:20.493902799 +0000 UTC m=+0.149895287 container init e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 00:57:20 np0005531754 podman[270496]: 2025-11-22 05:57:20.504060332 +0000 UTC m=+0.160052800 container start e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:57:20 np0005531754 podman[270496]: 2025-11-22 05:57:20.508215964 +0000 UTC m=+0.164208502 container attach e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 00:57:20 np0005531754 goofy_taussig[270512]: 167 167
Nov 22 00:57:20 np0005531754 systemd[1]: libpod-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope: Deactivated successfully.
Nov 22 00:57:20 np0005531754 conmon[270512]: conmon e02c124e88603d477e89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope/container/memory.events
Nov 22 00:57:20 np0005531754 podman[270496]: 2025-11-22 05:57:20.513015422 +0000 UTC m=+0.169007890 container died e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:57:20 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c0f667201e2f4437faab8a71dbf4c982daf55fcb5ce70bc0ab21923b13ac0e2f-merged.mount: Deactivated successfully.
Nov 22 00:57:20 np0005531754 podman[270496]: 2025-11-22 05:57:20.56294253 +0000 UTC m=+0.218934978 container remove e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:57:20 np0005531754 systemd[1]: libpod-conmon-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope: Deactivated successfully.
Nov 22 00:57:20 np0005531754 podman[270536]: 2025-11-22 05:57:20.756950358 +0000 UTC m=+0.055691402 container create 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:57:20 np0005531754 systemd[1]: Started libpod-conmon-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope.
Nov 22 00:57:20 np0005531754 podman[270536]: 2025-11-22 05:57:20.74023879 +0000 UTC m=+0.038979864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:57:20 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:57:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 96 KiB/s wr, 68 op/s
Nov 22 00:57:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:20 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:57:20 np0005531754 podman[270536]: 2025-11-22 05:57:20.857997707 +0000 UTC m=+0.156738831 container init 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 00:57:20 np0005531754 podman[270536]: 2025-11-22 05:57:20.871153349 +0000 UTC m=+0.169894433 container start 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:57:20 np0005531754 podman[270536]: 2025-11-22 05:57:20.876467412 +0000 UTC m=+0.175208486 container attach 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:57:21 np0005531754 nova_compute[255660]: 2025-11-22 05:57:21.395 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:21 np0005531754 nova_compute[255660]: 2025-11-22 05:57:21.420 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:21 np0005531754 nova_compute[255660]: 2025-11-22 05:57:21.420 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:21 np0005531754 nova_compute[255660]: 2025-11-22 05:57:21.421 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "format": "json"}]: dispatch
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:21 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:21.732+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c4fc8a89-bd59-4c0a-8c81-0de4fa453851' of type subvolume
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c4fc8a89-bd59-4c0a-8c81-0de4fa453851' of type subvolume
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851'' moved to trashcan
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]: {
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "osd_id": 1,
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "type": "bluestore"
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:    },
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "osd_id": 2,
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "type": "bluestore"
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:    },
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "osd_id": 0,
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:        "type": "bluestore"
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]:    }
Nov 22 00:57:21 np0005531754 bold_ishizaka[270553]: }
Nov 22 00:57:21 np0005531754 systemd[1]: libpod-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope: Deactivated successfully.
Nov 22 00:57:21 np0005531754 systemd[1]: libpod-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope: Consumed 1.079s CPU time.
Nov 22 00:57:21 np0005531754 podman[270536]: 2025-11-22 05:57:21.946176866 +0000 UTC m=+1.244917940 container died 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:57:21 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06-merged.mount: Deactivated successfully.
Nov 22 00:57:22 np0005531754 podman[270536]: 2025-11-22 05:57:22.008884226 +0000 UTC m=+1.307625260 container remove 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:57:22 np0005531754 systemd[1]: libpod-conmon-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope: Deactivated successfully.
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c67e547e-853b-4d9a-8ec6-3ee419c38d4c does not exist
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c969fc21-4322-474a-b177-448c2cb22dc7 does not exist
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 00:57:22 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 100 KiB/s wr, 70 op/s
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:22 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:24 np0005531754 nova_compute[255660]: 2025-11-22 05:57:24.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:24 np0005531754 nova_compute[255660]: 2025-11-22 05:57:24.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 99 KiB/s wr, 69 op/s
Nov 22 00:57:25 np0005531754 nova_compute[255660]: 2025-11-22 05:57:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:25 np0005531754 nova_compute[255660]: 2025-11-22 05:57:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70f361da-7ed0-4639-8730-40afb694cc73", "format": "json"}]: dispatch
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70f361da-7ed0-4639-8730-40afb694cc73, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70f361da-7ed0-4639-8730-40afb694cc73, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:25 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:25.517+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f361da-7ed0-4639-8730-40afb694cc73' of type subvolume
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f361da-7ed0-4639-8730-40afb694cc73' of type subvolume
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73'' moved to trashcan
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:25 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 00:57:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 00:57:26 np0005531754 nova_compute[255660]: 2025-11-22 05:57:26.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 99 KiB/s wr, 69 op/s
Nov 22 00:57:27 np0005531754 nova_compute[255660]: 2025-11-22 05:57:27.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 00:57:27 np0005531754 nova_compute[255660]: 2025-11-22 05:57:27.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 00:57:27 np0005531754 nova_compute[255660]: 2025-11-22 05:57:27.132 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 00:57:27 np0005531754 nova_compute[255660]: 2025-11-22 05:57:27.348 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 00:57:28 np0005531754 podman[270652]: 2025-11-22 05:57:28.247510418 +0000 UTC m=+0.096679031 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp'
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp' to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta'
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "format": "json"}]: dispatch
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:28 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 95 KiB/s wr, 52 op/s
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "format": "json"}]: dispatch
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:29 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:29.114+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '41dc1a04-6ad1-4773-8daa-7038ec6071c5' of type subvolume
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '41dc1a04-6ad1-4773-8daa-7038ec6071c5' of type subvolume
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5'' moved to trashcan
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 00:57:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 00:57:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:29 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 00:57:29 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 00:57:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 00:57:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 00:57:30 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 00:57:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 92 KiB/s wr, 32 op/s
Nov 22 00:57:31 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16", "format": "json"}]: dispatch
Nov 22 00:57:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 95 KiB/s wr, 11 op/s
Nov 22 00:57:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "format": "json"}]: dispatch
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9842ed03-cc84-4abe-85fe-b6107828690f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9842ed03-cc84-4abe-85fe-b6107828690f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:34 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:34.406+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9842ed03-cc84-4abe-85fe-b6107828690f' of type subvolume
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9842ed03-cc84-4abe-85fe-b6107828690f' of type subvolume
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f'' moved to trashcan
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 00:57:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16_e71660f7-dae4-4a03-8eba-3b01c731a81b", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16_e71660f7-dae4-4a03-8eba-3b01c731a81b, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp'
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp' to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta'
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16_e71660f7-dae4-4a03-8eba-3b01c731a81b, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp'
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp' to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta'
Nov 22 00:57:35 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:35 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:57:35.514 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:57:35 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:57:35.515 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:57:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 00:57:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:57:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:57:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:57:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:57:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:57:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:57:38 np0005531754 podman[270681]: 2025-11-22 05:57:38.260216503 +0000 UTC m=+0.097111842 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:57:38 np0005531754 podman[270680]: 2025-11-22 05:57:38.260742298 +0000 UTC m=+0.108433597 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 00:57:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "format": "json"}]: dispatch
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:39 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:39.022+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1465b04-fd21-4d9f-bc84-54d95bef1ba1' of type subvolume
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1465b04-fd21-4d9f-bc84-54d95bef1ba1' of type subvolume
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1'' moved to trashcan
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:39 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 00:57:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 56 KiB/s wr, 6 op/s
Nov 22 00:57:42 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:57:42.518 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea/.meta.tmp'
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea/.meta.tmp' to config b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea/.meta'
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "format": "json"}]: dispatch
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 00:57:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 22 00:57:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 00:57:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 22 00:57:42 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 22 00:57:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:57:43
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'images']
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:57:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:57:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 44 KiB/s wr, 5 op/s
Nov 22 00:57:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 44 KiB/s wr, 5 op/s
Nov 22 00:57:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:57:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2417489919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:57:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:57:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2417489919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "format": "json"}]: dispatch
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7ea7dcce-e673-4744-867d-5b02c225beea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7ea7dcce-e673-4744-867d-5b02c225beea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ea7dcce-e673-4744-867d-5b02c225beea' of type subvolume
Nov 22 00:57:47 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:47.353+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ea7dcce-e673-4744-867d-5b02c225beea' of type subvolume
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "force": true, "format": "json"}]: dispatch
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea'' moved to trashcan
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 00:57:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 00:57:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "format": "json"}]: dispatch
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:57:51 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:57:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "format": "json"}]: dispatch
Nov 22 00:57:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:57:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:57:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000356807461581038 of space, bias 4.0, pg target 0.4281689538972456 quantized to 16 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:57:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:57:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 22 00:57:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 22 00:57:53 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp'
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp' to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta'
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "format": "json"}]: dispatch
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:57:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:54 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:57:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "target_sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, target_sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 0fea4068-9bc1-4fcb-8da7-4bb427ba3d62 for path b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, target_sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 3d00e64c-c6bd-4014-9d75-6c2c64f0dda9)
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 3d00e64c-c6bd-4014-9d75-6c2c64f0dda9) -- by 0 seconds
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 00:57:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 00:57:57 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.mscchl(active, since 33m)
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.snap/396d061d-06cf-48da-a32e-5cf66e8782c8/e6393510-65ec-437b-80ea-ea4e82cad1d5' to b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/2d2bb832-daca-4d58-88f8-adb59d3125a8'
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] untracking 0fea4068-9bc1-4fcb-8da7-4bb427ba3d62
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 3d00e64c-c6bd-4014-9d75-6c2c64f0dda9)
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e", "format": "json"}]: dispatch
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:57:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:57:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:57:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 64 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 6 op/s
Nov 22 00:57:59 np0005531754 podman[270742]: 2025-11-22 05:57:59.265774198 +0000 UTC m=+0.127952930 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca/.meta.tmp'
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca/.meta.tmp' to config b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca/.meta'
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "format": "json"}]: dispatch
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 00:57:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 00:57:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:57:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp'
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp' to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta'
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "format": "json"}]: dispatch
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:00 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:00 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 64 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 7 op/s
Nov 22 00:58:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 00:58:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 00:58:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.499591) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082499615, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1510, "num_deletes": 252, "total_data_size": 1928815, "memory_usage": 1957088, "flush_reason": "Manual Compaction"}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082510011, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1907240, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24592, "largest_seqno": 26101, "table_properties": {"data_size": 1900094, "index_size": 3964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17388, "raw_average_key_size": 21, "raw_value_size": 1884814, "raw_average_value_size": 2281, "num_data_blocks": 177, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790979, "oldest_key_time": 1763790979, "file_creation_time": 1763791082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 10458 microseconds, and 4663 cpu microseconds.
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.510047) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1907240 bytes OK
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.510062) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.511614) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.511624) EVENT_LOG_v1 {"time_micros": 1763791082511621, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.511639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1921689, prev total WAL file size 1921689, number of live WAL files 2.
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.512291) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1862KB)], [56(9343KB)]
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082512357, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11474727, "oldest_snapshot_seqno": -1}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5523 keys, 9760514 bytes, temperature: kUnknown
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082559753, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 9760514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9720243, "index_size": 25377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 137488, "raw_average_key_size": 24, "raw_value_size": 9617867, "raw_average_value_size": 1741, "num_data_blocks": 1056, "num_entries": 5523, "num_filter_entries": 5523, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.560097) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 9760514 bytes
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.561807) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.5 rd, 205.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.1 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(11.1) write-amplify(5.1) OK, records in: 6050, records dropped: 527 output_compression: NoCompression
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.561845) EVENT_LOG_v1 {"time_micros": 1763791082561828, "job": 30, "event": "compaction_finished", "compaction_time_micros": 47511, "compaction_time_cpu_micros": 22242, "output_level": 6, "num_output_files": 1, "total_output_size": 9760514, "num_input_records": 6050, "num_output_records": 5523, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082562768, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082566307, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.512180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:58:02 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 00:58:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 64 KiB/s wr, 8 op/s
Nov 22 00:58:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab", "format": "json"}]: dispatch
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "format": "json"}]: dispatch
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '72cb8e63-63dd-4239-8be6-4c1b98b626ca' of type subvolume
Nov 22 00:58:03 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:03.679+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '72cb8e63-63dd-4239-8be6-4c1b98b626ca' of type subvolume
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca'' moved to trashcan
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 00:58:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 533 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9'' moved to trashcan
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 00:58:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 53 KiB/s wr, 7 op/s
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319/.meta.tmp'
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319/.meta.tmp' to config b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319/.meta'
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "format": "json"}]: dispatch
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 00:58:07 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 00:58:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab_ff3e7232-40bd-4efc-8a3f-80318631d2e5", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab_ff3e7232-40bd-4efc-8a3f-80318631d2e5, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp'
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp' to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta'
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab_ff3e7232-40bd-4efc-8a3f-80318631d2e5, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp'
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp' to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta'
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 00:58:09 np0005531754 podman[270771]: 2025-11-22 05:58:09.197898822 +0000 UTC m=+0.053195567 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:58:09 np0005531754 podman[270770]: 2025-11-22 05:58:09.220835327 +0000 UTC m=+0.081488394 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8_6932ae0a-642f-4769-a012-4989a1eed830", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8_6932ae0a-642f-4769-a012-4989a1eed830, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8_6932ae0a-642f-4769-a012-4989a1eed830, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 00:58:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:58:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 49 KiB/s wr, 5 op/s
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f93c12a6-84cd-4937-a909-48f837e88319", "format": "json"}]: dispatch
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f93c12a6-84cd-4937-a909-48f837e88319, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f93c12a6-84cd-4937-a909-48f837e88319, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f93c12a6-84cd-4937-a909-48f837e88319' of type subvolume
Nov 22 00:58:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:11.348+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f93c12a6-84cd-4937-a909-48f837e88319' of type subvolume
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319'' moved to trashcan
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "format": "json"}]: dispatch
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63' of type subvolume
Nov 22 00:58:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:11.754+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63' of type subvolume
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63'' moved to trashcan
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 86 KiB/s wr, 9 op/s
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "format": "json"}]: dispatch
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3dcff5a7-454d-46f7-9ff8-546a79d1c07a' of type subvolume
Nov 22 00:58:12 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:12.903+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3dcff5a7-454d-46f7-9ff8-546a79d1c07a' of type subvolume
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a'' moved to trashcan
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 00:58:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 22 00:58:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 22 00:58:12 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 22 00:58:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:58:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:58:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:58:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:58:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:58:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001/.meta.tmp'
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001/.meta.tmp' to config b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001/.meta'
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "format": "json"}]: dispatch
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 00:58:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 00:58:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.176 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.177 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.177 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.178 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.178 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:58:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:58:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1261161903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.610 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.781 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.783 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5062MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.783 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.784 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.856 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.856 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:58:15 np0005531754 nova_compute[255660]: 2025-11-22 05:58:15.880 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:58:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:58:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458841063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:58:16 np0005531754 nova_compute[255660]: 2025-11-22 05:58:16.330 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:58:16 np0005531754 nova_compute[255660]: 2025-11-22 05:58:16.337 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:58:16 np0005531754 nova_compute[255660]: 2025-11-22 05:58:16.355 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:58:16 np0005531754 nova_compute[255660]: 2025-11-22 05:58:16.357 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:58:16 np0005531754 nova_compute[255660]: 2025-11-22 05:58:16.358 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:58:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 00:58:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "format": "json"}]: dispatch
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a58dddbc-e4f6-44cf-84c7-f24633017001, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a58dddbc-e4f6-44cf-84c7-f24633017001, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a58dddbc-e4f6-44cf-84c7-f24633017001' of type subvolume
Nov 22 00:58:18 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:18.740+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a58dddbc-e4f6-44cf-84c7-f24633017001' of type subvolume
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001'' moved to trashcan
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 00:58:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 75 KiB/s wr, 8 op/s
Nov 22 00:58:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 75 KiB/s wr, 7 op/s
Nov 22 00:58:21 np0005531754 nova_compute[255660]: 2025-11-22 05:58:21.359 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde/.meta.tmp'
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde/.meta.tmp' to config b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde/.meta'
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "format": "json"}]: dispatch
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 00:58:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 5 op/s
Nov 22 00:58:23 np0005531754 nova_compute[255660]: 2025-11-22 05:58:23.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:23 np0005531754 nova_compute[255660]: 2025-11-22 05:58:23.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:58:23 np0005531754 podman[271027]: 2025-11-22 05:58:23.243155506 +0000 UTC m=+0.095601923 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 00:58:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 22 00:58:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 22 00:58:23 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 22 00:58:23 np0005531754 podman[271027]: 2025-11-22 05:58:23.362502984 +0000 UTC m=+0.214949341 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 00:58:24 np0005531754 nova_compute[255660]: 2025-11-22 05:58:24.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:58:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:58:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 5 op/s
Nov 22 00:58:25 np0005531754 nova_compute[255660]: 2025-11-22 05:58:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 07576dd4-5ba9-46c4-a21d-f1cca505943b does not exist
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 6d05f55d-355d-4418-ad8d-d66c7db25974 does not exist
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 960a49cd-547d-471a-9c39-183a460a66ae does not exist
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp'
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp' to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta'
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "format": "json"}]: dispatch
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:25 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:25 np0005531754 podman[271458]: 2025-11-22 05:58:25.996863335 +0000 UTC m=+0.065138396 container create 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 00:58:26 np0005531754 systemd[1]: Started libpod-conmon-13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33.scope.
Nov 22 00:58:26 np0005531754 podman[271458]: 2025-11-22 05:58:25.969829981 +0000 UTC m=+0.038105092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:58:26 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:58:26 np0005531754 podman[271458]: 2025-11-22 05:58:26.112764171 +0000 UTC m=+0.181039282 container init 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 00:58:26 np0005531754 podman[271458]: 2025-11-22 05:58:26.124295711 +0000 UTC m=+0.192570772 container start 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:58:26 np0005531754 podman[271458]: 2025-11-22 05:58:26.128624847 +0000 UTC m=+0.196899918 container attach 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:58:26 np0005531754 nova_compute[255660]: 2025-11-22 05:58:26.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:26 np0005531754 nova_compute[255660]: 2025-11-22 05:58:26.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:26 np0005531754 sad_jang[271474]: 167 167
Nov 22 00:58:26 np0005531754 systemd[1]: libpod-13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33.scope: Deactivated successfully.
Nov 22 00:58:26 np0005531754 podman[271458]: 2025-11-22 05:58:26.133306292 +0000 UTC m=+0.201581343 container died 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 00:58:26 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f3b32006c52d0c31ef2f75ce6e82b2b8ffbe5a4cee1ba3fdc1ceb3d42d5e208f-merged.mount: Deactivated successfully.
Nov 22 00:58:26 np0005531754 podman[271458]: 2025-11-22 05:58:26.196413383 +0000 UTC m=+0.264688414 container remove 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:58:26 np0005531754 systemd[1]: libpod-conmon-13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33.scope: Deactivated successfully.
Nov 22 00:58:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:58:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:26 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "35291e42-2480-4994-b801-7fa345608cde", "format": "json"}]: dispatch
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:35291e42-2480-4994-b801-7fa345608cde, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:35291e42-2480-4994-b801-7fa345608cde, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35291e42-2480-4994-b801-7fa345608cde' of type subvolume
Nov 22 00:58:26 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:26.427+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35291e42-2480-4994-b801-7fa345608cde' of type subvolume
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde'' moved to trashcan
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 00:58:26 np0005531754 podman[271498]: 2025-11-22 05:58:26.461574068 +0000 UTC m=+0.069929295 container create b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 00:58:26 np0005531754 systemd[1]: Started libpod-conmon-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope.
Nov 22 00:58:26 np0005531754 podman[271498]: 2025-11-22 05:58:26.430220078 +0000 UTC m=+0.038575335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:58:26 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:58:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:26 np0005531754 podman[271498]: 2025-11-22 05:58:26.564061735 +0000 UTC m=+0.172417052 container init b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:58:26 np0005531754 podman[271498]: 2025-11-22 05:58:26.581421179 +0000 UTC m=+0.189776426 container start b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 00:58:26 np0005531754 podman[271498]: 2025-11-22 05:58:26.587074111 +0000 UTC m=+0.195429368 container attach b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:58:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 54 KiB/s wr, 4 op/s
Nov 22 00:58:27 np0005531754 nova_compute[255660]: 2025-11-22 05:58:27.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:27 np0005531754 dreamy_pasteur[271514]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:58:27 np0005531754 dreamy_pasteur[271514]: --> relative data size: 1.0
Nov 22 00:58:27 np0005531754 dreamy_pasteur[271514]: --> All data devices are unavailable
Nov 22 00:58:27 np0005531754 systemd[1]: libpod-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope: Deactivated successfully.
Nov 22 00:58:27 np0005531754 systemd[1]: libpod-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope: Consumed 1.122s CPU time.
Nov 22 00:58:27 np0005531754 conmon[271514]: conmon b91f3682e8518d49b133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope/container/memory.events
Nov 22 00:58:27 np0005531754 podman[271543]: 2025-11-22 05:58:27.807074182 +0000 UTC m=+0.033033905 container died b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:58:27 np0005531754 systemd[1]: var-lib-containers-storage-overlay-8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57-merged.mount: Deactivated successfully.
Nov 22 00:58:27 np0005531754 podman[271543]: 2025-11-22 05:58:27.879719519 +0000 UTC m=+0.105679162 container remove b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 00:58:27 np0005531754 systemd[1]: libpod-conmon-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope: Deactivated successfully.
Nov 22 00:58:28 np0005531754 nova_compute[255660]: 2025-11-22 05:58:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:58:28 np0005531754 nova_compute[255660]: 2025-11-22 05:58:28.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:58:28 np0005531754 nova_compute[255660]: 2025-11-22 05:58:28.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:58:28 np0005531754 nova_compute[255660]: 2025-11-22 05:58:28.157 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:58:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:28 np0005531754 podman[271699]: 2025-11-22 05:58:28.709650468 +0000 UTC m=+0.039084158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:58:28 np0005531754 podman[271699]: 2025-11-22 05:58:28.828562625 +0000 UTC m=+0.157996325 container create b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 00:58:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Nov 22 00:58:29 np0005531754 systemd[1]: Started libpod-conmon-b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d.scope.
Nov 22 00:58:29 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:58:29 np0005531754 podman[271699]: 2025-11-22 05:58:29.180964908 +0000 UTC m=+0.510398658 container init b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 00:58:29 np0005531754 podman[271699]: 2025-11-22 05:58:29.193112793 +0000 UTC m=+0.522546493 container start b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:58:29 np0005531754 clever_lehmann[271715]: 167 167
Nov 22 00:58:29 np0005531754 systemd[1]: libpod-b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d.scope: Deactivated successfully.
Nov 22 00:58:29 np0005531754 podman[271699]: 2025-11-22 05:58:29.232624462 +0000 UTC m=+0.562058172 container attach b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 00:58:29 np0005531754 podman[271699]: 2025-11-22 05:58:29.233098835 +0000 UTC m=+0.562532545 container died b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:58:29 np0005531754 systemd[1]: var-lib-containers-storage-overlay-808e485f8e135435b3d7822bc887ba800eb57ed95bf45e1e16136376f4a36f6c-merged.mount: Deactivated successfully.
Nov 22 00:58:29 np0005531754 podman[271699]: 2025-11-22 05:58:29.57324668 +0000 UTC m=+0.902680380 container remove b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:58:29 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9", "format": "json"}]: dispatch
Nov 22 00:58:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:29 np0005531754 systemd[1]: libpod-conmon-b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d.scope: Deactivated successfully.
Nov 22 00:58:29 np0005531754 podman[271732]: 2025-11-22 05:58:29.701368652 +0000 UTC m=+0.359462542 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 00:58:29 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:29 np0005531754 podman[271766]: 2025-11-22 05:58:29.820157086 +0000 UTC m=+0.048403698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:58:29 np0005531754 podman[271766]: 2025-11-22 05:58:29.9206866 +0000 UTC m=+0.148933152 container create d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:58:30 np0005531754 systemd[1]: Started libpod-conmon-d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3.scope.
Nov 22 00:58:30 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:58:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:30 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:30 np0005531754 podman[271766]: 2025-11-22 05:58:30.139201875 +0000 UTC m=+0.367448467 container init d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:58:30 np0005531754 podman[271766]: 2025-11-22 05:58:30.151625458 +0000 UTC m=+0.379872010 container start d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 00:58:30 np0005531754 podman[271766]: 2025-11-22 05:58:30.329387372 +0000 UTC m=+0.557633924 container attach d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326/.meta.tmp'
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326/.meta.tmp' to config b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326/.meta'
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "format": "json"}]: dispatch
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 00:58:30 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:30 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Nov 22 00:58:30 np0005531754 hungry_nash[271783]: {
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:    "0": [
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:        {
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "devices": [
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "/dev/loop3"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            ],
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_name": "ceph_lv0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_size": "21470642176",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "name": "ceph_lv0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "tags": {
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cluster_name": "ceph",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.crush_device_class": "",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.encrypted": "0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osd_id": "0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.type": "block",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.vdo": "0"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            },
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "type": "block",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "vg_name": "ceph_vg0"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:        }
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:    ],
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:    "1": [
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:        {
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "devices": [
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "/dev/loop4"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            ],
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_name": "ceph_lv1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_size": "21470642176",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "name": "ceph_lv1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "tags": {
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cluster_name": "ceph",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.crush_device_class": "",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.encrypted": "0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osd_id": "1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.type": "block",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.vdo": "0"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            },
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "type": "block",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "vg_name": "ceph_vg1"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:        }
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:    ],
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:    "2": [
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:        {
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "devices": [
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "/dev/loop5"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            ],
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_name": "ceph_lv2",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_size": "21470642176",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "name": "ceph_lv2",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "tags": {
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.cluster_name": "ceph",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.crush_device_class": "",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.encrypted": "0",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osd_id": "2",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.type": "block",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:                "ceph.vdo": "0"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            },
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "type": "block",
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:            "vg_name": "ceph_vg2"
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:        }
Nov 22 00:58:30 np0005531754 hungry_nash[271783]:    ]
Nov 22 00:58:30 np0005531754 hungry_nash[271783]: }
Nov 22 00:58:30 np0005531754 systemd[1]: libpod-d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3.scope: Deactivated successfully.
Nov 22 00:58:30 np0005531754 podman[271766]: 2025-11-22 05:58:30.92382073 +0000 UTC m=+1.152067252 container died d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:58:30 np0005531754 systemd[1]: var-lib-containers-storage-overlay-48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e-merged.mount: Deactivated successfully.
Nov 22 00:58:30 np0005531754 podman[271766]: 2025-11-22 05:58:30.997628838 +0000 UTC m=+1.225875360 container remove d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 00:58:31 np0005531754 systemd[1]: libpod-conmon-d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3.scope: Deactivated successfully.
Nov 22 00:58:31 np0005531754 podman[271949]: 2025-11-22 05:58:31.796590867 +0000 UTC m=+0.058654232 container create e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:58:31 np0005531754 systemd[1]: Started libpod-conmon-e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117.scope.
Nov 22 00:58:31 np0005531754 podman[271949]: 2025-11-22 05:58:31.767673532 +0000 UTC m=+0.029736957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:58:31 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:58:31 np0005531754 podman[271949]: 2025-11-22 05:58:31.885056648 +0000 UTC m=+0.147120073 container init e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 00:58:31 np0005531754 podman[271949]: 2025-11-22 05:58:31.892022965 +0000 UTC m=+0.154086300 container start e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:58:31 np0005531754 wonderful_mendeleev[271966]: 167 167
Nov 22 00:58:31 np0005531754 systemd[1]: libpod-e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117.scope: Deactivated successfully.
Nov 22 00:58:31 np0005531754 podman[271949]: 2025-11-22 05:58:31.896208597 +0000 UTC m=+0.158271982 container attach e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:58:31 np0005531754 podman[271949]: 2025-11-22 05:58:31.896937486 +0000 UTC m=+0.159000841 container died e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 00:58:31 np0005531754 systemd[1]: var-lib-containers-storage-overlay-52f7385ab9750b2f12f27bfedf778df673f28c7b967a28ce045e9efe460a239b-merged.mount: Deactivated successfully.
Nov 22 00:58:31 np0005531754 podman[271949]: 2025-11-22 05:58:31.932611532 +0000 UTC m=+0.194674887 container remove e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 00:58:31 np0005531754 systemd[1]: libpod-conmon-e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117.scope: Deactivated successfully.
Nov 22 00:58:32 np0005531754 podman[271990]: 2025-11-22 05:58:32.12211216 +0000 UTC m=+0.048365167 container create 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 00:58:32 np0005531754 systemd[1]: Started libpod-conmon-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope.
Nov 22 00:58:32 np0005531754 podman[271990]: 2025-11-22 05:58:32.103873551 +0000 UTC m=+0.030126578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:58:32 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:58:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:32 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:58:32 np0005531754 podman[271990]: 2025-11-22 05:58:32.22957271 +0000 UTC m=+0.155825747 container init 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 00:58:32 np0005531754 podman[271990]: 2025-11-22 05:58:32.245851636 +0000 UTC m=+0.172104673 container start 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:58:32 np0005531754 podman[271990]: 2025-11-22 05:58:32.250299565 +0000 UTC m=+0.176552572 container attach 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:58:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 47 KiB/s wr, 3 op/s
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]: {
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "osd_id": 1,
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "type": "bluestore"
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:    },
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "osd_id": 2,
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "type": "bluestore"
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:    },
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "osd_id": 0,
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:        "type": "bluestore"
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]:    }
Nov 22 00:58:33 np0005531754 affectionate_lumiere[272006]: }
Nov 22 00:58:33 np0005531754 systemd[1]: libpod-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope: Deactivated successfully.
Nov 22 00:58:33 np0005531754 systemd[1]: libpod-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope: Consumed 1.098s CPU time.
Nov 22 00:58:33 np0005531754 podman[271990]: 2025-11-22 05:58:33.331038225 +0000 UTC m=+1.257291232 container died 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:58:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:33 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef-merged.mount: Deactivated successfully.
Nov 22 00:58:33 np0005531754 podman[271990]: 2025-11-22 05:58:33.407990527 +0000 UTC m=+1.334243544 container remove 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:58:33 np0005531754 systemd[1]: libpod-conmon-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope: Deactivated successfully.
Nov 22 00:58:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:58:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:58:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 730cb392-55db-45a0-83f0-eef602fe1fc3 does not exist
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 481fdc55-019a-453f-8d5b-7ce03c6b76af does not exist
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9_4b7ccb8c-a586-453d-ac99-e365a37bb6c2", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9_4b7ccb8c-a586-453d-ac99-e365a37bb6c2, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp'
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp' to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta'
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9_4b7ccb8c-a586-453d-ac99-e365a37bb6c2, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp'
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp' to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta'
Nov 22 00:58:33 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:34 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:34 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "format": "json"}]: dispatch
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074d9098-d04c-45ea-9d9a-2dcbe0a4b326' of type subvolume
Nov 22 00:58:34 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:34.405+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074d9098-d04c-45ea-9d9a-2dcbe0a4b326' of type subvolume
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326'' moved to trashcan
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 00:58:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 41 KiB/s wr, 3 op/s
Nov 22 00:58:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.779 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:58:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.780 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:58:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 00:58:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.938 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:58:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:58:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "format": "json"}]: dispatch
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:04ec723f-2266-44ad-8738-9d300104eaa9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:04ec723f-2266-44ad-8738-9d300104eaa9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:37 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:37.090+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '04ec723f-2266-44ad-8738-9d300104eaa9' of type subvolume
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '04ec723f-2266-44ad-8738-9d300104eaa9' of type subvolume
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9'' moved to trashcan
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 00:58:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 22 00:58:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 22 00:58:37 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 22 00:58:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e_d41398b5-1031-4e8d-933e-c6c94e22ca32", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e_d41398b5-1031-4e8d-933e-c6c94e22ca32, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp'
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp' to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta'
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e_d41398b5-1031-4e8d-933e-c6c94e22ca32, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp'
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp' to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta'
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:58:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 58 KiB/s wr, 5 op/s
Nov 22 00:58:40 np0005531754 podman[272102]: 2025-11-22 05:58:40.210519709 +0000 UTC m=+0.065315001 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 00:58:40 np0005531754 podman[272103]: 2025-11-22 05:58:40.212930354 +0000 UTC m=+0.065697911 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp'
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp' to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta'
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "format": "json"}]: dispatch
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:40 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:40 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 58 KiB/s wr, 5 op/s
Nov 22 00:58:41 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:58:41.783 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6262f914-71c2-4411-a49e-54f30a05659d", "format": "json"}]: dispatch
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6262f914-71c2-4411-a49e-54f30a05659d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6262f914-71c2-4411-a49e-54f30a05659d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6262f914-71c2-4411-a49e-54f30a05659d' of type subvolume
Nov 22 00:58:42 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:42.334+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6262f914-71c2-4411-a49e-54f30a05659d' of type subvolume
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d'' moved to trashcan
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 00:58:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 6 op/s
Nov 22 00:58:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 22 00:58:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 22 00:58:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 22 00:58:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 22 00:58:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 22 00:58:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:58:43
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:58:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0", "format": "json"}]: dispatch
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 57 KiB/s wr, 4 op/s
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "format": "json"}]: dispatch
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:58:45 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:58:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:58:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:58:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Nov 22 00:58:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:58:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148052480' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:58:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:58:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148052480' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:58:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "format": "json"}]: dispatch
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0_45cbbe81-54f3-4d33-b6ed-0541e70b79ac", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0_45cbbe81-54f3-4d33-b6ed-0541e70b79ac, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp'
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp' to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta'
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0_45cbbe81-54f3-4d33-b6ed-0541e70b79ac, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp'
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp' to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta'
Nov 22 00:58:48 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "target_sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, target_sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 1a33974a-a90d-4c47-97c4-c31d41cbeceb for path b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137'
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, target_sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 6dd259a0-2767-493c-a1d5-a32b18495137)
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 00:58:49 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 3 op/s
Nov 22 00:58:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 6dd259a0-2767-493c-a1d5-a32b18495137) -- by 0 seconds
Nov 22 00:58:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 00:58:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 00:58:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "format": "json"}]: dispatch
Nov 22 00:58:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:12681db0-dac5-4be1-a94e-db0502d683a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 415 B/s rd, 60 KiB/s wr, 6 op/s
Nov 22 00:58:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 22 00:58:52 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 22 00:58:52 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000414865777821189 of space, bias 4.0, pg target 0.49783893338542684 quantized to 16 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:58:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:58:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 22 00:58:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 22 00:58:53 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 22 00:58:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 7 op/s
Nov 22 00:58:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 43 KiB/s wr, 4 op/s
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:12681db0-dac5-4be1-a94e-db0502d683a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12681db0-dac5-4be1-a94e-db0502d683a8' of type subvolume
Nov 22 00:58:57 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:57.428+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12681db0-dac5-4be1-a94e-db0502d683a8' of type subvolume
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "force": true, "format": "json"}]: dispatch
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.snap/7cb6a540-aa78-41e5-b112-51878416b681/2856a001-7e16-4367-8d2d-8c670740b800' to b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/b0c60bd9-e586-4f2c-ae51-cb9345b16ccf'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8'' moved to trashcan
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] untracking 1a33974a-a90d-4c47-97c4-c31d41cbeceb
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 00:58:57 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 6dd259a0-2767-493c-a1d5-a32b18495137)
Nov 22 00:58:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:58:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 00:59:00 np0005531754 podman[272141]: 2025-11-22 05:59:00.291645898 +0000 UTC m=+0.143851835 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 00:59:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 00:59:00 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 49 KiB/s wr, 3 op/s
Nov 22 00:59:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:02 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 00:59:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 00:59:02 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 00:59:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:59:02 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:59:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 723 B/s rd, 47 KiB/s wr, 4 op/s
Nov 22 00:59:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 22 00:59:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 22 00:59:03 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 22 00:59:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 46 KiB/s wr, 4 op/s
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8/.meta.tmp'
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8/.meta.tmp' to config b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8/.meta'
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "format": "json"}]: dispatch
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:05 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:05 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:59:05 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "format": "json"}]: dispatch
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:59:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:59:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 46 KiB/s wr, 4 op/s
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp'
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp' to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta'
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "format": "json"}]: dispatch
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:59:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:59:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 44 KiB/s wr, 4 op/s
Nov 22 00:59:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 00:59:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:09 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:10 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10", "format": "json"}]: dispatch
Nov 22 00:59:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:10 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 44 KiB/s wr, 4 op/s
Nov 22 00:59:11 np0005531754 podman[272167]: 2025-11-22 05:59:11.232985148 +0000 UTC m=+0.072840983 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 00:59:11 np0005531754 podman[272168]: 2025-11-22 05:59:11.240302874 +0000 UTC m=+0.077913650 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 00:59:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19", "format": "json"}]: dispatch
Nov 22 00:59:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:11 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "format": "json"}]: dispatch
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea42c00d-c230-4795-b72e-34001c4be0a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea42c00d-c230-4795-b72e-34001c4be0a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:12 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:12.653+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea42c00d-c230-4795-b72e-34001c4be0a8' of type subvolume
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea42c00d-c230-4795-b72e-34001c4be0a8' of type subvolume
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8'' moved to trashcan
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 00:59:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s wr, 4 op/s
Nov 22 00:59:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:59:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:59:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:59:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:59:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:59:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:59:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d", "format": "json"}]: dispatch
Nov 22 00:59:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:14 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 3 op/s
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.166 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.166 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.167 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.167 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:59:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:59:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199808711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.704 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.890 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.892 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5079MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.892 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.893 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.946 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.947 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 00:59:15 np0005531754 nova_compute[255660]: 2025-11-22 05:59:15.965 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 00:59:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 00:59:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345386042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 00:59:16 np0005531754 nova_compute[255660]: 2025-11-22 05:59:16.392 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 00:59:16 np0005531754 nova_compute[255660]: 2025-11-22 05:59:16.397 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 00:59:16 np0005531754 nova_compute[255660]: 2025-11-22 05:59:16.412 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 00:59:16 np0005531754 nova_compute[255660]: 2025-11-22 05:59:16.413 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 00:59:16 np0005531754 nova_compute[255660]: 2025-11-22 05:59:16.413 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19_85eccfbd-7b38-4356-b114-dfaefadf6ee5", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19_85eccfbd-7b38-4356-b114-dfaefadf6ee5, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp'
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp' to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta'
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19_85eccfbd-7b38-4356-b114-dfaefadf6ee5, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp'
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp' to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta'
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 3 op/s
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d_84b80663-4f8e-4ed8-afc4-2b3b3f9e14c9", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d_84b80663-4f8e-4ed8-afc4-2b3b3f9e14c9, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d_84b80663-4f8e-4ed8-afc4-2b3b3f9e14c9, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 83 KiB/s wr, 5 op/s
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "format": "json"}]: dispatch
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:20 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:20.100+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f58a5d10-062f-4cf1-87a0-845f4b3226dc' of type subvolume
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f58a5d10-062f-4cf1-87a0-845f4b3226dc' of type subvolume
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc'' moved to trashcan
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d/.meta.tmp'
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d/.meta.tmp' to config b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d/.meta'
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "format": "json"}]: dispatch
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:59:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812", "format": "json"}]: dispatch
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:21 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 00:59:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 22 00:59:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 22 00:59:23 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 22 00:59:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:23 np0005531754 nova_compute[255660]: 2025-11-22 05:59:23.409 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:23 np0005531754 nova_compute[255660]: 2025-11-22 05:59:23.441 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:24 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Nov 22 00:59:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:24 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 88 KiB/s wr, 7 op/s
Nov 22 00:59:25 np0005531754 nova_compute[255660]: 2025-11-22 05:59:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:25 np0005531754 nova_compute[255660]: 2025-11-22 05:59:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:25 np0005531754 nova_compute[255660]: 2025-11-22 05:59:25.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 00:59:26 np0005531754 nova_compute[255660]: 2025-11-22 05:59:26.126 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812_f3ed6427-14c4-4c3c-91df-3002a87409c7", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812_f3ed6427-14c4-4c3c-91df-3002a87409c7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812_f3ed6427-14c4-4c3c-91df-3002a87409c7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 88 KiB/s wr, 7 op/s
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:26 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:27 np0005531754 nova_compute[255660]: 2025-11-22 05:59:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:27 np0005531754 nova_compute[255660]: 2025-11-22 05:59:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:27 np0005531754 nova_compute[255660]: 2025-11-22 05:59:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "format": "json"}]: dispatch
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:28 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:28.105+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d' of type subvolume
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d' of type subvolume
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d'' moved to trashcan
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 00:59:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 84 KiB/s wr, 6 op/s
Nov 22 00:59:30 np0005531754 nova_compute[255660]: 2025-11-22 05:59:30.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 00:59:30 np0005531754 nova_compute[255660]: 2025-11-22 05:59:30.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 00:59:30 np0005531754 nova_compute[255660]: 2025-11-22 05:59:30.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 00:59:30 np0005531754 nova_compute[255660]: 2025-11-22 05:59:30.150 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 00:59:30 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b", "format": "json"}]: dispatch
Nov 22 00:59:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:30 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 84 KiB/s wr, 6 op/s
Nov 22 00:59:31 np0005531754 podman[272249]: 2025-11-22 05:59:31.314214138 +0000 UTC m=+0.170835999 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d/.meta.tmp'
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d/.meta.tmp' to config b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d/.meta'
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "format": "json"}]: dispatch
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 00:59:31 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 00:59:31 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:59:31 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:59:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 74 KiB/s wr, 4 op/s
Nov 22 00:59:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 22 00:59:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 22 00:59:33 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 22 00:59:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 22 00:59:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 22 00:59:33 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b_069edcb7-8804-49a7-b7ef-0c39ebac6aee", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b_069edcb7-8804-49a7-b7ef-0c39ebac6aee, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b_069edcb7-8804-49a7-b7ef-0c39ebac6aee, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev dfbffaf2-fc72-4ebb-9d8f-fdc410a1b86b does not exist
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev aa3c5b55-6c36-4eeb-902c-1f2a9026b532 does not exist
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 1c525ea3-d11e-4374-8382-fd43b58dc835 does not exist
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 00:59:34 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 00:59:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 5 op/s
Nov 22 00:59:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 00:59:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:59:35 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 00:59:35 np0005531754 podman[272547]: 2025-11-22 05:59:35.326791361 +0000 UTC m=+0.056432994 container create bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:59:35 np0005531754 systemd[1]: Started libpod-conmon-bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b.scope.
Nov 22 00:59:35 np0005531754 podman[272547]: 2025-11-22 05:59:35.297360342 +0000 UTC m=+0.027002055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:59:35 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:59:35 np0005531754 podman[272547]: 2025-11-22 05:59:35.429051571 +0000 UTC m=+0.158693294 container init bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 00:59:35 np0005531754 podman[272547]: 2025-11-22 05:59:35.438458813 +0000 UTC m=+0.168100486 container start bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:59:35 np0005531754 podman[272547]: 2025-11-22 05:59:35.44317907 +0000 UTC m=+0.172820743 container attach bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:59:35 np0005531754 funny_zhukovsky[272564]: 167 167
Nov 22 00:59:35 np0005531754 systemd[1]: libpod-bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b.scope: Deactivated successfully.
Nov 22 00:59:35 np0005531754 podman[272547]: 2025-11-22 05:59:35.446252552 +0000 UTC m=+0.175894195 container died bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 00:59:35 np0005531754 systemd[1]: var-lib-containers-storage-overlay-59840e9aa668b49b4f7bd6c40535dd95c4218b2dfd5017620c6caddfb6a2eb21-merged.mount: Deactivated successfully.
Nov 22 00:59:35 np0005531754 podman[272547]: 2025-11-22 05:59:35.499319464 +0000 UTC m=+0.228961097 container remove bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:59:35 np0005531754 systemd[1]: libpod-conmon-bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b.scope: Deactivated successfully.
Nov 22 00:59:35 np0005531754 podman[272587]: 2025-11-22 05:59:35.713848422 +0000 UTC m=+0.042881009 container create a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:59:35 np0005531754 systemd[1]: Started libpod-conmon-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope.
Nov 22 00:59:35 np0005531754 podman[272587]: 2025-11-22 05:59:35.698939543 +0000 UTC m=+0.027972150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:59:35 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:59:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:35 np0005531754 podman[272587]: 2025-11-22 05:59:35.841043641 +0000 UTC m=+0.170076268 container init a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:59:35 np0005531754 podman[272587]: 2025-11-22 05:59:35.855631421 +0000 UTC m=+0.184664048 container start a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:59:35 np0005531754 podman[272587]: 2025-11-22 05:59:35.860683457 +0000 UTC m=+0.189716054 container attach a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "format": "json"}]: dispatch
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:36 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:36.685+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fe5f4c39-d36f-406f-9522-4233e36c1e1d' of type subvolume
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fe5f4c39-d36f-406f-9522-4233e36c1e1d' of type subvolume
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d'' moved to trashcan
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 00:59:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 3 op/s
Nov 22 00:59:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:59:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 00:59:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:59:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 00:59:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:59:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 00:59:36 np0005531754 hungry_burnell[272604]: --> passed data devices: 0 physical, 3 LVM
Nov 22 00:59:36 np0005531754 hungry_burnell[272604]: --> relative data size: 1.0
Nov 22 00:59:36 np0005531754 hungry_burnell[272604]: --> All data devices are unavailable
Nov 22 00:59:37 np0005531754 systemd[1]: libpod-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope: Deactivated successfully.
Nov 22 00:59:37 np0005531754 systemd[1]: libpod-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope: Consumed 1.098s CPU time.
Nov 22 00:59:37 np0005531754 conmon[272604]: conmon a0d36d375d42023e267d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope/container/memory.events
Nov 22 00:59:37 np0005531754 podman[272587]: 2025-11-22 05:59:37.006136361 +0000 UTC m=+1.335168988 container died a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 00:59:37 np0005531754 systemd[1]: var-lib-containers-storage-overlay-073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e-merged.mount: Deactivated successfully.
Nov 22 00:59:37 np0005531754 podman[272587]: 2025-11-22 05:59:37.068903323 +0000 UTC m=+1.397935910 container remove a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 00:59:37 np0005531754 systemd[1]: libpod-conmon-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope: Deactivated successfully.
Nov 22 00:59:37 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7", "format": "json"}]: dispatch
Nov 22 00:59:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:37 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:37 np0005531754 podman[272785]: 2025-11-22 05:59:37.953187959 +0000 UTC m=+0.070204453 container create 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 00:59:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 22 00:59:37 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 22 00:59:37 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 22 00:59:37 np0005531754 systemd[1]: Started libpod-conmon-39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937.scope.
Nov 22 00:59:38 np0005531754 podman[272785]: 2025-11-22 05:59:37.919714632 +0000 UTC m=+0.036731116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:59:38 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:59:38 np0005531754 podman[272785]: 2025-11-22 05:59:38.053105936 +0000 UTC m=+0.170122430 container init 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:59:38 np0005531754 podman[272785]: 2025-11-22 05:59:38.060705899 +0000 UTC m=+0.177722333 container start 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 00:59:38 np0005531754 podman[272785]: 2025-11-22 05:59:38.064237905 +0000 UTC m=+0.181254319 container attach 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 00:59:38 np0005531754 nervous_tu[272801]: 167 167
Nov 22 00:59:38 np0005531754 systemd[1]: libpod-39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937.scope: Deactivated successfully.
Nov 22 00:59:38 np0005531754 podman[272785]: 2025-11-22 05:59:38.067904073 +0000 UTC m=+0.184920507 container died 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 00:59:38 np0005531754 systemd[1]: var-lib-containers-storage-overlay-dea0ffd43520c2ca91b43d59b4ccca942c2936532a09864ab645ad81f59ce860-merged.mount: Deactivated successfully.
Nov 22 00:59:38 np0005531754 podman[272785]: 2025-11-22 05:59:38.114574313 +0000 UTC m=+0.231590737 container remove 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:59:38 np0005531754 systemd[1]: libpod-conmon-39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937.scope: Deactivated successfully.
Nov 22 00:59:38 np0005531754 podman[272827]: 2025-11-22 05:59:38.341653278 +0000 UTC m=+0.052714474 container create 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:59:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:38 np0005531754 systemd[1]: Started libpod-conmon-1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488.scope.
Nov 22 00:59:38 np0005531754 podman[272827]: 2025-11-22 05:59:38.317900032 +0000 UTC m=+0.028961048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:59:38 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:59:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:38 np0005531754 podman[272827]: 2025-11-22 05:59:38.457826521 +0000 UTC m=+0.168887517 container init 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 00:59:38 np0005531754 podman[272827]: 2025-11-22 05:59:38.474714534 +0000 UTC m=+0.185775490 container start 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:59:38 np0005531754 podman[272827]: 2025-11-22 05:59:38.480895169 +0000 UTC m=+0.191956165 container attach 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 00:59:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 99 KiB/s wr, 4 op/s
Nov 22 00:59:39 np0005531754 angry_pare[272843]: {
Nov 22 00:59:39 np0005531754 angry_pare[272843]:    "0": [
Nov 22 00:59:39 np0005531754 angry_pare[272843]:        {
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "devices": [
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "/dev/loop3"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            ],
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_name": "ceph_lv0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_size": "21470642176",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "name": "ceph_lv0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "tags": {
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cluster_name": "ceph",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.crush_device_class": "",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.encrypted": "0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osd_id": "0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.type": "block",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.vdo": "0"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            },
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "type": "block",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "vg_name": "ceph_vg0"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:        }
Nov 22 00:59:39 np0005531754 angry_pare[272843]:    ],
Nov 22 00:59:39 np0005531754 angry_pare[272843]:    "1": [
Nov 22 00:59:39 np0005531754 angry_pare[272843]:        {
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "devices": [
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "/dev/loop4"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            ],
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_name": "ceph_lv1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_size": "21470642176",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "name": "ceph_lv1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "tags": {
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cluster_name": "ceph",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.crush_device_class": "",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.encrypted": "0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osd_id": "1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.type": "block",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.vdo": "0"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            },
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "type": "block",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "vg_name": "ceph_vg1"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:        }
Nov 22 00:59:39 np0005531754 angry_pare[272843]:    ],
Nov 22 00:59:39 np0005531754 angry_pare[272843]:    "2": [
Nov 22 00:59:39 np0005531754 angry_pare[272843]:        {
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "devices": [
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "/dev/loop5"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            ],
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_name": "ceph_lv2",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_size": "21470642176",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "name": "ceph_lv2",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "tags": {
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cephx_lockbox_secret": "",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.cluster_name": "ceph",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.crush_device_class": "",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.encrypted": "0",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osd_id": "2",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.type": "block",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:                "ceph.vdo": "0"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            },
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "type": "block",
Nov 22 00:59:39 np0005531754 angry_pare[272843]:            "vg_name": "ceph_vg2"
Nov 22 00:59:39 np0005531754 angry_pare[272843]:        }
Nov 22 00:59:39 np0005531754 angry_pare[272843]:    ]
Nov 22 00:59:39 np0005531754 angry_pare[272843]: }
Nov 22 00:59:39 np0005531754 systemd[1]: libpod-1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488.scope: Deactivated successfully.
Nov 22 00:59:39 np0005531754 podman[272827]: 2025-11-22 05:59:39.266337656 +0000 UTC m=+0.977398642 container died 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 00:59:39 np0005531754 systemd[1]: var-lib-containers-storage-overlay-be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c-merged.mount: Deactivated successfully.
Nov 22 00:59:39 np0005531754 podman[272827]: 2025-11-22 05:59:39.342716523 +0000 UTC m=+1.053777479 container remove 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:59:39 np0005531754 systemd[1]: libpod-conmon-1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488.scope: Deactivated successfully.
Nov 22 00:59:40 np0005531754 podman[273008]: 2025-11-22 05:59:40.111864963 +0000 UTC m=+0.040372233 container create 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 00:59:40 np0005531754 systemd[1]: Started libpod-conmon-1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d.scope.
Nov 22 00:59:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 00:59:40 np0005531754 podman[273008]: 2025-11-22 05:59:40.095305019 +0000 UTC m=+0.023812309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:59:40 np0005531754 podman[273008]: 2025-11-22 05:59:40.195957757 +0000 UTC m=+0.124465047 container init 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137'' moved to trashcan
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 00:59:40 np0005531754 podman[273008]: 2025-11-22 05:59:40.211273377 +0000 UTC m=+0.139780687 container start 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:59:40 np0005531754 podman[273008]: 2025-11-22 05:59:40.215675465 +0000 UTC m=+0.144182765 container attach 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 00:59:40 np0005531754 boring_moser[273025]: 167 167
Nov 22 00:59:40 np0005531754 systemd[1]: libpod-1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d.scope: Deactivated successfully.
Nov 22 00:59:40 np0005531754 podman[273008]: 2025-11-22 05:59:40.219505718 +0000 UTC m=+0.148013058 container died 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 00:59:40 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b57ee495d9acd81b8e79c7a774fca2453734da5beeeae4e756c4639f6df91d04-merged.mount: Deactivated successfully.
Nov 22 00:59:40 np0005531754 podman[273008]: 2025-11-22 05:59:40.260889997 +0000 UTC m=+0.189397267 container remove 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 00:59:40 np0005531754 systemd[1]: libpod-conmon-1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d.scope: Deactivated successfully.
Nov 22 00:59:40 np0005531754 podman[273049]: 2025-11-22 05:59:40.465390577 +0000 UTC m=+0.071186379 container create f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 00:59:40 np0005531754 systemd[1]: Started libpod-conmon-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope.
Nov 22 00:59:40 np0005531754 podman[273049]: 2025-11-22 05:59:40.432047713 +0000 UTC m=+0.037843575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 00:59:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 00:59:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 00:59:40 np0005531754 podman[273049]: 2025-11-22 05:59:40.579615207 +0000 UTC m=+0.185411059 container init f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:59:40 np0005531754 podman[273049]: 2025-11-22 05:59:40.59537427 +0000 UTC m=+0.201170032 container start f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 00:59:40 np0005531754 podman[273049]: 2025-11-22 05:59:40.598845863 +0000 UTC m=+0.204641715 container attach f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 00:59:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 76 KiB/s wr, 3 op/s
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]: {
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "osd_id": 1,
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "type": "bluestore"
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:    },
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "osd_id": 2,
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "type": "bluestore"
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:    },
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "osd_id": 0,
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:        "type": "bluestore"
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]:    }
Nov 22 00:59:41 np0005531754 blissful_varahamihira[273065]: }
Nov 22 00:59:41 np0005531754 systemd[1]: libpod-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope: Deactivated successfully.
Nov 22 00:59:41 np0005531754 podman[273049]: 2025-11-22 05:59:41.785953403 +0000 UTC m=+1.391749165 container died f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 00:59:41 np0005531754 systemd[1]: libpod-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope: Consumed 1.196s CPU time.
Nov 22 00:59:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92-merged.mount: Deactivated successfully.
Nov 22 00:59:41 np0005531754 podman[273049]: 2025-11-22 05:59:41.845052467 +0000 UTC m=+1.450848229 container remove f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 00:59:41 np0005531754 systemd[1]: libpod-conmon-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope: Deactivated successfully.
Nov 22 00:59:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 00:59:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:59:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 00:59:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:59:41 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e6238643-41fe-4f73-99b6-f536a9f8405d does not exist
Nov 22 00:59:41 np0005531754 podman[273098]: 2025-11-22 05:59:41.90788435 +0000 UTC m=+0.080000144 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 00:59:41 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 9f8757b8-6b04-49ba-9168-da034a39a02b does not exist
Nov 22 00:59:41 np0005531754 podman[273107]: 2025-11-22 05:59:41.926510669 +0000 UTC m=+0.098532741 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 00:59:42 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:59:42 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7_b4f5f2ab-bb7c-43c0-aeca-d65879453a15", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7_b4f5f2ab-bb7c-43c0-aeca-d65879453a15, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7_b4f5f2ab-bb7c-43c0-aeca-d65879453a15, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 81 KiB/s wr, 5 op/s
Nov 22 00:59:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 22 00:59:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 22 00:59:43 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681_4b9a1734-eccd-4930-a293-83126ba93df5", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681_4b9a1734-eccd-4930-a293-83126ba93df5, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681_4b9a1734-eccd-4930-a293-83126ba93df5, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:59:43
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'backups']
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 00:59:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 00:59:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 96 KiB/s wr, 5 op/s
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 344 B/s rd, 20 KiB/s wr, 2 op/s
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "format": "json"}]: dispatch
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:46 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:46.972+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8ea650d4-0ea6-408a-8107-7d06795baf3e' of type subvolume
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8ea650d4-0ea6-408a-8107-7d06795baf3e' of type subvolume
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e'' moved to trashcan
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:59:46 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 00:59:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 00:59:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/841035940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 00:59:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 00:59:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/841035940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 00:59:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69", "format": "json"}]: dispatch
Nov 22 00:59:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:47 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 22 00:59:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 22 00:59:48 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 22 00:59:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 00:59:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 70 KiB/s wr, 5 op/s
Nov 22 00:59:52 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:59:52.381 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 00:59:52 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:59:52.382 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 69 KiB/s wr, 5 op/s
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69_81755b2d-8933-4607-923d-d11f8165f30d", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69_81755b2d-8933-4607-923d-d11f8165f30d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69_81755b2d-8933-4607-923d-d11f8165f30d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:52 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00047731185290095723 of space, bias 4.0, pg target 0.5727742234811487 quantized to 16 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 00:59:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 00:59:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 22 00:59:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 22 00:59:53 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 22 00:59:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 81 KiB/s wr, 6 op/s
Nov 22 00:59:56 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 05:59:56.384 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10_69301942-28c7-4014-a246-50ecf9648404", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10_69301942-28c7-4014-a246-50ecf9648404, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10_69301942-28c7-4014-a246-50ecf9648404, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 233 B/s rd, 11 KiB/s wr, 1 op/s
Nov 22 00:59:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 22 00:59:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 22 00:59:58 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 22 00:59:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 00:59:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 71 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Nov 22 00:59:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 22 00:59:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 22 00:59:59 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp'
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp' to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta'
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "format": "json"}]: dispatch
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 00:59:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 00:59:59 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "format": "json"}]: dispatch
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 00:59:59 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:59.981+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18e9f280-7994-4c49-95f7-6a6f9ebabd4f' of type subvolume
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18e9f280-7994-4c49-95f7-6a6f9ebabd4f' of type subvolume
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "force": true, "format": "json"}]: dispatch
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f'' moved to trashcan
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 00:59:59 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 01:00:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 71 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 01:00:01 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 01:00:02 np0005531754 podman[273198]: 2025-11-22 06:00:02.225270542 +0000 UTC m=+0.085895173 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 01:00:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 92 KiB/s wr, 6 op/s
Nov 22 01:00:03 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701", "format": "json"}]: dispatch
Nov 22 01:00:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:03 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 22 01:00:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 22 01:00:03 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 22 01:00:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 64 KiB/s wr, 4 op/s
Nov 22 01:00:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 3 op/s
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701_f12cbc90-6f8e-4e88-9d93-d7bc80b572a3", "force": true, "format": "json"}]: dispatch
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701_f12cbc90-6f8e-4e88-9d93-d7bc80b572a3, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp'
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp' to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta'
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701_f12cbc90-6f8e-4e88-9d93-d7bc80b572a3, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701", "force": true, "format": "json"}]: dispatch
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp'
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp' to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta'
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 418 B/s rd, 62 KiB/s wr, 4 op/s
Nov 22 01:00:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 60 KiB/s wr, 3 op/s
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "format": "json"}]: dispatch
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 01:00:12 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:00:12.077+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4a01560f-b8db-4a3a-8f6c-493d0f32d091' of type subvolume
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4a01560f-b8db-4a3a-8f6c-493d0f32d091' of type subvolume
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "force": true, "format": "json"}]: dispatch
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091'' moved to trashcan
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 01:00:12 np0005531754 podman[273226]: 2025-11-22 06:00:12.210726506 +0000 UTC m=+0.070635194 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 01:00:12 np0005531754 podman[273227]: 2025-11-22 06:00:12.214080865 +0000 UTC m=+0.073795768 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 01:00:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 2 op/s
Nov 22 01:00:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 22 01:00:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 22 01:00:13 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 22 01:00:13 np0005531754 nova_compute[255660]: 2025-11-22 06:00:13.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:13 np0005531754 nova_compute[255660]: 2025-11-22 06:00:13.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 01:00:13 np0005531754 nova_compute[255660]: 2025-11-22 06:00:13.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 01:00:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b778070>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b778c10>)]
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 01:00:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 01:00:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 2 op/s
Nov 22 01:00:15 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.mscchl(active, since 35m)
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.153 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.188 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.189 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.189 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.189 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.190 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:00:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:00:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145969244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.647 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.861 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.863 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.863 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.863 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:00:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 3 op/s
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.930 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.931 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:00:16 np0005531754 nova_compute[255660]: 2025-11-22 06:00:16.959 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:00:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:00:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72605276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:00:17 np0005531754 nova_compute[255660]: 2025-11-22 06:00:17.442 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:00:17 np0005531754 nova_compute[255660]: 2025-11-22 06:00:17.449 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:00:17 np0005531754 nova_compute[255660]: 2025-11-22 06:00:17.480 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:00:17 np0005531754 nova_compute[255660]: 2025-11-22 06:00:17.482 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:00:17 np0005531754 nova_compute[255660]: 2025-11-22 06:00:17.483 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:00:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Nov 22 01:00:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 3 op/s
Nov 22 01:00:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 2 op/s
Nov 22 01:00:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 22 01:00:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 22 01:00:23 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 22 01:00:24 np0005531754 nova_compute[255660]: 2025-11-22 06:00:24.458 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 2 op/s
Nov 22 01:00:26 np0005531754 nova_compute[255660]: 2025-11-22 06:00:26.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:26 np0005531754 nova_compute[255660]: 2025-11-22 06:00:26.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:26 np0005531754 nova_compute[255660]: 2025-11-22 06:00:26.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:26 np0005531754 nova_compute[255660]: 2025-11-22 06:00:26.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:00:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 1 op/s
Nov 22 01:00:28 np0005531754 nova_compute[255660]: 2025-11-22 06:00:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:28 np0005531754 nova_compute[255660]: 2025-11-22 06:00:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:28 np0005531754 nova_compute[255660]: 2025-11-22 06:00:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s wr, 0 op/s
Nov 22 01:00:29 np0005531754 nova_compute[255660]: 2025-11-22 06:00:29.144 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:29 np0005531754 nova_compute[255660]: 2025-11-22 06:00:29.145 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:29 np0005531754 nova_compute[255660]: 2025-11-22 06:00:29.146 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 01:00:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s wr, 0 op/s
Nov 22 01:00:31 np0005531754 nova_compute[255660]: 2025-11-22 06:00:31.147 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:00:31 np0005531754 nova_compute[255660]: 2025-11-22 06:00:31.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:00:31 np0005531754 nova_compute[255660]: 2025-11-22 06:00:31.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:00:31 np0005531754 nova_compute[255660]: 2025-11-22 06:00:31.164 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:00:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 0 op/s
Nov 22 01:00:33 np0005531754 podman[273306]: 2025-11-22 06:00:33.104762581 +0000 UTC m=+0.130880148 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 01:00:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s wr, 0 op/s
Nov 22 01:00:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 22 01:00:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:00:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:00:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:00:36.941 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:00:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:00:36.941 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:00:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 22 01:00:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:42 np0005531754 podman[273383]: 2025-11-22 06:00:42.423495229 +0000 UTC m=+0.115044284 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 01:00:42 np0005531754 podman[273384]: 2025-11-22 06:00:42.424000652 +0000 UTC m=+0.116548924 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 01:00:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c92dfd75-1d82-4223-b294-3b3d75830d90 does not exist
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev cf4c472c-a662-4263-8c64-b19a95745a5f does not exist
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev b1f70152-7188-4c6e-af59-0c7b8df006de does not exist
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:00:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:00:43
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:00:43 np0005531754 podman[273648]: 2025-11-22 06:00:43.862259133 +0000 UTC m=+0.057330368 container create 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:00:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:00:43 np0005531754 systemd[1]: Started libpod-conmon-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope.
Nov 22 01:00:43 np0005531754 podman[273648]: 2025-11-22 06:00:43.841331722 +0000 UTC m=+0.036402977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:00:43 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:00:43 np0005531754 podman[273648]: 2025-11-22 06:00:43.962547939 +0000 UTC m=+0.157619234 container init 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 01:00:43 np0005531754 podman[273648]: 2025-11-22 06:00:43.969086245 +0000 UTC m=+0.164157480 container start 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 01:00:43 np0005531754 podman[273648]: 2025-11-22 06:00:43.972739313 +0000 UTC m=+0.167810638 container attach 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:00:43 np0005531754 beautiful_banzai[273665]: 167 167
Nov 22 01:00:43 np0005531754 systemd[1]: libpod-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope: Deactivated successfully.
Nov 22 01:00:43 np0005531754 conmon[273665]: conmon 0ab9d8809f7ebbf9cb3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope/container/memory.events
Nov 22 01:00:43 np0005531754 podman[273648]: 2025-11-22 06:00:43.976245447 +0000 UTC m=+0.171316712 container died 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:00:44 np0005531754 systemd[1]: var-lib-containers-storage-overlay-87d56529a879a6a9c208d08528200082d980b8cdb8b6f4c2fc26ee0e1e3fed67-merged.mount: Deactivated successfully.
Nov 22 01:00:44 np0005531754 podman[273648]: 2025-11-22 06:00:44.031909338 +0000 UTC m=+0.226980583 container remove 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 01:00:44 np0005531754 systemd[1]: libpod-conmon-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope: Deactivated successfully.
Nov 22 01:00:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:00:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:00:44 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:00:44 np0005531754 podman[273688]: 2025-11-22 06:00:44.248157453 +0000 UTC m=+0.056569087 container create 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 01:00:44 np0005531754 systemd[1]: Started libpod-conmon-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope.
Nov 22 01:00:44 np0005531754 podman[273688]: 2025-11-22 06:00:44.229095973 +0000 UTC m=+0.037507627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:00:44 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:00:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:44 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:44 np0005531754 podman[273688]: 2025-11-22 06:00:44.345036529 +0000 UTC m=+0.153448233 container init 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:00:44 np0005531754 podman[273688]: 2025-11-22 06:00:44.360225866 +0000 UTC m=+0.168637510 container start 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 01:00:44 np0005531754 podman[273688]: 2025-11-22 06:00:44.364815909 +0000 UTC m=+0.173227563 container attach 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 01:00:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:45 np0005531754 quizzical_banach[273705]: --> passed data devices: 0 physical, 3 LVM
Nov 22 01:00:45 np0005531754 quizzical_banach[273705]: --> relative data size: 1.0
Nov 22 01:00:45 np0005531754 quizzical_banach[273705]: --> All data devices are unavailable
Nov 22 01:00:45 np0005531754 systemd[1]: libpod-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope: Deactivated successfully.
Nov 22 01:00:45 np0005531754 systemd[1]: libpod-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope: Consumed 1.109s CPU time.
Nov 22 01:00:45 np0005531754 conmon[273705]: conmon 3aaccf2111adfda7ba2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope/container/memory.events
Nov 22 01:00:45 np0005531754 podman[273688]: 2025-11-22 06:00:45.5112869 +0000 UTC m=+1.319698554 container died 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 01:00:45 np0005531754 systemd[1]: var-lib-containers-storage-overlay-cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8-merged.mount: Deactivated successfully.
Nov 22 01:00:45 np0005531754 podman[273688]: 2025-11-22 06:00:45.592052044 +0000 UTC m=+1.400463658 container remove 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 01:00:45 np0005531754 systemd[1]: libpod-conmon-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope: Deactivated successfully.
Nov 22 01:00:46 np0005531754 podman[273889]: 2025-11-22 06:00:46.377671886 +0000 UTC m=+0.046625650 container create fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 01:00:46 np0005531754 systemd[1]: Started libpod-conmon-fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b.scope.
Nov 22 01:00:46 np0005531754 podman[273889]: 2025-11-22 06:00:46.356987602 +0000 UTC m=+0.025941386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:00:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:00:46 np0005531754 podman[273889]: 2025-11-22 06:00:46.471757197 +0000 UTC m=+0.140711041 container init fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 01:00:46 np0005531754 podman[273889]: 2025-11-22 06:00:46.479187227 +0000 UTC m=+0.148141011 container start fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 01:00:46 np0005531754 podman[273889]: 2025-11-22 06:00:46.483355708 +0000 UTC m=+0.152309502 container attach fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 01:00:46 np0005531754 lucid_jennings[273906]: 167 167
Nov 22 01:00:46 np0005531754 systemd[1]: libpod-fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b.scope: Deactivated successfully.
Nov 22 01:00:46 np0005531754 podman[273889]: 2025-11-22 06:00:46.488080555 +0000 UTC m=+0.157034339 container died fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 01:00:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c1219d5504e1f982af8c601e9827de70b292c9cbbdaef6ad1ba0f30e0f922fa9-merged.mount: Deactivated successfully.
Nov 22 01:00:46 np0005531754 podman[273889]: 2025-11-22 06:00:46.53417864 +0000 UTC m=+0.203132434 container remove fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:00:46 np0005531754 systemd[1]: libpod-conmon-fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b.scope: Deactivated successfully.
Nov 22 01:00:46 np0005531754 podman[273930]: 2025-11-22 06:00:46.731794145 +0000 UTC m=+0.068861856 container create f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 01:00:46 np0005531754 systemd[1]: Started libpod-conmon-f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126.scope.
Nov 22 01:00:46 np0005531754 podman[273930]: 2025-11-22 06:00:46.70436312 +0000 UTC m=+0.041430871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:00:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:00:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:46 np0005531754 podman[273930]: 2025-11-22 06:00:46.842572194 +0000 UTC m=+0.179639915 container init f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 01:00:46 np0005531754 podman[273930]: 2025-11-22 06:00:46.856790535 +0000 UTC m=+0.193858276 container start f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 01:00:46 np0005531754 podman[273930]: 2025-11-22 06:00:46.861631865 +0000 UTC m=+0.198699696 container attach f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:00:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:00:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42910641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:00:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:00:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42910641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:00:47 np0005531754 boring_turing[273947]: {
Nov 22 01:00:47 np0005531754 boring_turing[273947]:    "0": [
Nov 22 01:00:47 np0005531754 boring_turing[273947]:        {
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "devices": [
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "/dev/loop3"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            ],
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_name": "ceph_lv0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_size": "21470642176",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "name": "ceph_lv0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "tags": {
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cluster_name": "ceph",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.crush_device_class": "",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.encrypted": "0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osd_id": "0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.type": "block",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.vdo": "0"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            },
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "type": "block",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "vg_name": "ceph_vg0"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:        }
Nov 22 01:00:47 np0005531754 boring_turing[273947]:    ],
Nov 22 01:00:47 np0005531754 boring_turing[273947]:    "1": [
Nov 22 01:00:47 np0005531754 boring_turing[273947]:        {
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "devices": [
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "/dev/loop4"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            ],
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_name": "ceph_lv1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_size": "21470642176",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "name": "ceph_lv1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "tags": {
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cluster_name": "ceph",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.crush_device_class": "",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.encrypted": "0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osd_id": "1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.type": "block",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.vdo": "0"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            },
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "type": "block",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "vg_name": "ceph_vg1"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:        }
Nov 22 01:00:47 np0005531754 boring_turing[273947]:    ],
Nov 22 01:00:47 np0005531754 boring_turing[273947]:    "2": [
Nov 22 01:00:47 np0005531754 boring_turing[273947]:        {
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "devices": [
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "/dev/loop5"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            ],
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_name": "ceph_lv2",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_size": "21470642176",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "name": "ceph_lv2",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "tags": {
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.cluster_name": "ceph",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.crush_device_class": "",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.encrypted": "0",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osd_id": "2",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.type": "block",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:                "ceph.vdo": "0"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            },
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "type": "block",
Nov 22 01:00:47 np0005531754 boring_turing[273947]:            "vg_name": "ceph_vg2"
Nov 22 01:00:47 np0005531754 boring_turing[273947]:        }
Nov 22 01:00:47 np0005531754 boring_turing[273947]:    ]
Nov 22 01:00:47 np0005531754 boring_turing[273947]: }
Nov 22 01:00:47 np0005531754 systemd[1]: libpod-f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126.scope: Deactivated successfully.
Nov 22 01:00:47 np0005531754 podman[273956]: 2025-11-22 06:00:47.771003182 +0000 UTC m=+0.031821693 container died f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:00:47 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3-merged.mount: Deactivated successfully.
Nov 22 01:00:47 np0005531754 podman[273956]: 2025-11-22 06:00:47.853661537 +0000 UTC m=+0.114479978 container remove f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 01:00:47 np0005531754 systemd[1]: libpod-conmon-f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126.scope: Deactivated successfully.
Nov 22 01:00:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:48 np0005531754 podman[274113]: 2025-11-22 06:00:48.68931047 +0000 UTC m=+0.056349361 container create b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:00:48 np0005531754 systemd[1]: Started libpod-conmon-b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487.scope.
Nov 22 01:00:48 np0005531754 podman[274113]: 2025-11-22 06:00:48.663994382 +0000 UTC m=+0.031033363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:00:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:00:48 np0005531754 podman[274113]: 2025-11-22 06:00:48.789261249 +0000 UTC m=+0.156300240 container init b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:00:48 np0005531754 podman[274113]: 2025-11-22 06:00:48.801331662 +0000 UTC m=+0.168370553 container start b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:00:48 np0005531754 podman[274113]: 2025-11-22 06:00:48.804828536 +0000 UTC m=+0.171867517 container attach b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 01:00:48 np0005531754 brave_noyce[274130]: 167 167
Nov 22 01:00:48 np0005531754 systemd[1]: libpod-b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487.scope: Deactivated successfully.
Nov 22 01:00:48 np0005531754 podman[274113]: 2025-11-22 06:00:48.80985606 +0000 UTC m=+0.176894981 container died b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 22 01:00:48 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f0081c3e54f9d4b89f8d237b5b1f1d8caf58038075dc743022fe3e26ce1a9a25-merged.mount: Deactivated successfully.
Nov 22 01:00:48 np0005531754 podman[274113]: 2025-11-22 06:00:48.86545681 +0000 UTC m=+0.232495731 container remove b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 01:00:48 np0005531754 systemd[1]: libpod-conmon-b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487.scope: Deactivated successfully.
Nov 22 01:00:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:49 np0005531754 podman[274154]: 2025-11-22 06:00:49.081577561 +0000 UTC m=+0.056843324 container create 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:00:49 np0005531754 systemd[1]: Started libpod-conmon-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope.
Nov 22 01:00:49 np0005531754 podman[274154]: 2025-11-22 06:00:49.05614805 +0000 UTC m=+0.031413903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:00:49 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:00:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:49 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:00:49 np0005531754 podman[274154]: 2025-11-22 06:00:49.184227202 +0000 UTC m=+0.159492965 container init 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 01:00:49 np0005531754 podman[274154]: 2025-11-22 06:00:49.19236533 +0000 UTC m=+0.167631093 container start 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:00:49 np0005531754 podman[274154]: 2025-11-22 06:00:49.195874354 +0000 UTC m=+0.171140117 container attach 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]: {
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "osd_id": 1,
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "type": "bluestore"
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:    },
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "osd_id": 2,
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "type": "bluestore"
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:    },
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "osd_id": 0,
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:        "type": "bluestore"
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]:    }
Nov 22 01:00:50 np0005531754 eager_bardeen[274170]: }
Nov 22 01:00:50 np0005531754 systemd[1]: libpod-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope: Deactivated successfully.
Nov 22 01:00:50 np0005531754 systemd[1]: libpod-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope: Consumed 1.028s CPU time.
Nov 22 01:00:50 np0005531754 podman[274204]: 2025-11-22 06:00:50.841588003 +0000 UTC m=+0.029936863 container died 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:00:50 np0005531754 systemd[1]: var-lib-containers-storage-overlay-175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2-merged.mount: Deactivated successfully.
Nov 22 01:00:50 np0005531754 podman[274204]: 2025-11-22 06:00:50.902574148 +0000 UTC m=+0.090922938 container remove 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:00:50 np0005531754 systemd[1]: libpod-conmon-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope: Deactivated successfully.
Nov 22 01:00:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:00:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:00:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:00:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:00:50 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 4e956076-3567-4abb-92c4-5ce3e49019cc does not exist
Nov 22 01:00:50 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev a4aecf8e-7765-4ca9-b37c-565eb916d38b does not exist
Nov 22 01:00:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:00:51 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:00:52 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:00:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:00:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:54 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:56 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:00:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:00:58 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:00 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:02 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:04 np0005531754 podman[274279]: 2025-11-22 06:01:04.25221348 +0000 UTC m=+0.112287540 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 01:01:04 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:06 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:08 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:10 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:12 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:13 np0005531754 podman[274305]: 2025-11-22 06:01:13.236593268 +0000 UTC m=+0.086418034 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 01:01:13 np0005531754 podman[274306]: 2025-11-22 06:01:13.262895572 +0000 UTC m=+0.106583373 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 01:01:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b60e5e0>)]
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b663eb0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b6634f0>)]
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:01:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.073686) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274073765, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2525, "num_deletes": 513, "total_data_size": 3522358, "memory_usage": 3574864, "flush_reason": "Manual Compaction"}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274113643, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3253120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26102, "largest_seqno": 28626, "table_properties": {"data_size": 3242090, "index_size": 6564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 27926, "raw_average_key_size": 20, "raw_value_size": 3217445, "raw_average_value_size": 2408, "num_data_blocks": 288, "num_entries": 1336, "num_filter_entries": 1336, "num_deletions": 513, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791083, "oldest_key_time": 1763791083, "file_creation_time": 1763791274, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 40004 microseconds, and 8882 cpu microseconds.
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.113702) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3253120 bytes OK
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.113727) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.118688) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.118714) EVENT_LOG_v1 {"time_micros": 1763791274118706, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.118737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3510434, prev total WAL file size 3510434, number of live WAL files 2.
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.120803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3176KB)], [59(9531KB)]
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274120927, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13013634, "oldest_snapshot_seqno": -1}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5838 keys, 8435102 bytes, temperature: kUnknown
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274226303, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8435102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8395561, "index_size": 23815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 145951, "raw_average_key_size": 25, "raw_value_size": 8290366, "raw_average_value_size": 1420, "num_data_blocks": 977, "num_entries": 5838, "num_filter_entries": 5838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791274, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.226675) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8435102 bytes
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.228391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.4 rd, 80.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.6) write-amplify(2.6) OK, records in: 6859, records dropped: 1021 output_compression: NoCompression
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.228420) EVENT_LOG_v1 {"time_micros": 1763791274228406, "job": 32, "event": "compaction_finished", "compaction_time_micros": 105496, "compaction_time_cpu_micros": 40425, "output_level": 6, "num_output_files": 1, "total_output_size": 8435102, "num_input_records": 6859, "num_output_records": 5838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274229603, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274232865, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.120695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:01:14 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:01:14 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:15 np0005531754 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.mscchl(active, since 36m)
Nov 22 01:01:16 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Nov 22 01:01:17 np0005531754 nova_compute[255660]: 2025-11-22 06:01:17.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:18 np0005531754 systemd-logind[798]: New session 51 of user zuul.
Nov 22 01:01:18 np0005531754 systemd[1]: Started Session 51 of User zuul.
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.290 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.291 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.291 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.291 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.292 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:01:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:01:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361735710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.728 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.887 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.888 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5033MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.888 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:01:18 np0005531754 nova_compute[255660]: 2025-11-22 06:01:18.888 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:01:18 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 01:01:19 np0005531754 nova_compute[255660]: 2025-11-22 06:01:19.774 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:01:19 np0005531754 nova_compute[255660]: 2025-11-22 06:01:19.774 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:01:19 np0005531754 nova_compute[255660]: 2025-11-22 06:01:19.854 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 01:01:19 np0005531754 nova_compute[255660]: 2025-11-22 06:01:19.946 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 01:01:19 np0005531754 nova_compute[255660]: 2025-11-22 06:01:19.947 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 01:01:19 np0005531754 nova_compute[255660]: 2025-11-22 06:01:19.966 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 01:01:19 np0005531754 nova_compute[255660]: 2025-11-22 06:01:19.995 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 01:01:20 np0005531754 nova_compute[255660]: 2025-11-22 06:01:20.022 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:01:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:01:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191580808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:01:20 np0005531754 nova_compute[255660]: 2025-11-22 06:01:20.515 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:01:20 np0005531754 nova_compute[255660]: 2025-11-22 06:01:20.522 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:01:20 np0005531754 nova_compute[255660]: 2025-11-22 06:01:20.552 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:01:20 np0005531754 nova_compute[255660]: 2025-11-22 06:01:20.554 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:01:20 np0005531754 nova_compute[255660]: 2025-11-22 06:01:20.554 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:01:20 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 01:01:21 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14509 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:22 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14511 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 01:01:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678055306' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 01:01:22 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 01:01:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:24 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 01:01:26 np0005531754 nova_compute[255660]: 2025-11-22 06:01:26.551 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:26 np0005531754 nova_compute[255660]: 2025-11-22 06:01:26.578 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:26 np0005531754 nova_compute[255660]: 2025-11-22 06:01:26.578 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:26 np0005531754 nova_compute[255660]: 2025-11-22 06:01:26.578 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:01:26 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 01:01:27 np0005531754 nova_compute[255660]: 2025-11-22 06:01:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:27 np0005531754 nova_compute[255660]: 2025-11-22 06:01:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:28 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 22 01:01:29 np0005531754 nova_compute[255660]: 2025-11-22 06:01:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:29 np0005531754 nova_compute[255660]: 2025-11-22 06:01:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:29 np0005531754 nova_compute[255660]: 2025-11-22 06:01:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:30 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:32 np0005531754 nova_compute[255660]: 2025-11-22 06:01:32.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:01:32 np0005531754 nova_compute[255660]: 2025-11-22 06:01:32.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:01:32 np0005531754 nova_compute[255660]: 2025-11-22 06:01:32.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:01:32 np0005531754 nova_compute[255660]: 2025-11-22 06:01:32.156 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:01:32 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:34 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:35 np0005531754 podman[274669]: 2025-11-22 06:01:35.263142644 +0000 UTC m=+0.114500015 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 01:01:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:01:36.941 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:01:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:01:36.942 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:01:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:01:36.942 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:01:36 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:38 np0005531754 ovs-vsctl[274745]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 01:01:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:38 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:39 np0005531754 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 01:01:39 np0005531754 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 01:01:39 np0005531754 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 01:01:40 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: cache status {prefix=cache status} (starting...)
Nov 22 01:01:40 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: client ls {prefix=client ls} (starting...)
Nov 22 01:01:40 np0005531754 lvm[275104]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 01:01:40 np0005531754 lvm[275104]: VG ceph_vg1 finished
Nov 22 01:01:40 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14515 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:40 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: damage ls {prefix=damage ls} (starting...)
Nov 22 01:01:40 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:41 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump loads {prefix=dump loads} (starting...)
Nov 22 01:01:41 np0005531754 lvm[275144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 01:01:41 np0005531754 lvm[275144]: VG ceph_vg2 finished
Nov 22 01:01:41 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:41 np0005531754 lvm[275155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 01:01:41 np0005531754 lvm[275155]: VG ceph_vg0 finished
Nov 22 01:01:41 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 22 01:01:41 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 22 01:01:41 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 22 01:01:41 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 22 01:01:41 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14521 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:41 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:01:41 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:01:41.868+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:01:41 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 22 01:01:42 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333229747' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 01:01:42 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: ops {prefix=ops} (starting...)
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4217697821' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/765355900' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119397056' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 01:01:42 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 22 01:01:42 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864016272' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 01:01:43 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session ls {prefix=session ls} (starting...)
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881267314' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 01:01:43 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: status {prefix=status} (starting...)
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738643034' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578060366' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:01:43
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root']
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:01:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 01:01:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990816085' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14543 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:44 np0005531754 podman[275580]: 2025-11-22 06:01:44.232643398 +0000 UTC m=+0.093572395 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 01:01:44 np0005531754 podman[275582]: 2025-11-22 06:01:44.243367336 +0000 UTC m=+0.103295046 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 01:01:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 01:01:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187603636' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 01:01:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 22 01:01:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948242793' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 01:01:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 01:01:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256204332' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 01:01:44 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 22 01:01:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388120625' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 01:01:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14555 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:45 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 01:01:45 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:01:45.571+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 01:01:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14557 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:45 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 22 01:01:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982758084' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 01:01:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14563 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 22 01:01:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956909896' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 1769472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 1769472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812521 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 1761280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 1761280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813669 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.843079567s of 12.888220787s, submitted: 10
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 1736704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 1736704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 1736704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817114 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 1720320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 1712128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818263 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 1703936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 1695744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 1687552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 1679360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 1679360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820561 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 1671168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.874602318s of 15.915491104s, submitted: 12
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 1654784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 1654784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 1638400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 1638400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824007 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 1630208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 1630208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 1622016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 1622016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 1622016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828597 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 1613824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 1613824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.916749001s of 10.982564926s, submitted: 18
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833189 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 834337 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835486 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.905089378s of 12.933971405s, submitted: 8
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838933 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840082 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.852154732s of 12.882711411s, submitted: 8
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841229 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 1531904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 1531904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842378 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 1515520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 1490944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884706497s of 10.928488731s, submitted: 10
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 1482752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846972 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 1474560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848120 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 1449984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849267 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.922524452s of 10.949849129s, submitted: 6
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 1417216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 1409024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 1409024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850414 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 1392640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 1384448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853856 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.000711441s of 11.032500267s, submitted: 8
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856151 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 1327104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857298 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.984521866s of 13.017908096s, submitted: 8
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 1302528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 1302528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 1294336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 1294336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 1294336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 1236992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 1236992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1212416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1212416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 1204224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 1196032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 1196032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 1171456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 18.55 MB, 0.03 MB/s
Interval WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14567 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 345.672515869s of 345.679687500s, submitted: 2
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 1769472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 1662976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 1613824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27ae5fc00
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 ms_handle_reset con 0x55c27d491c00 session 0x55c27c84cd20
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5749 writes, 24K keys, 5749 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5749 writes, 912 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdo
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.100036621s of 600.098510742s, submitted: 90
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 01:01:46 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220139389' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 385024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 187.620025635s of 187.940231323s, submitted: 90
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981955 data_alloc: 218103808 data_used: 184320
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 24182784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 130 ms_handle_reset con 0x55c27dd64c00 session 0x55c27b9cda40
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 24158208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba9a000/0x0/0x4ffc00000, data 0x10bd8da/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,1])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba98000/0x0/0x4ffc00000, data 0x10bd90d/0x1185000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 131 ms_handle_reset con 0x55c27dd65000 session 0x55c27d99da40
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991578 data_alloc: 218103808 data_used: 188416
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.228631973s of 10.493903160s, submitted: 48
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993528 data_alloc: 218103808 data_used: 192512
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.048984528s of 17.207933426s, submitted: 15
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 23986176 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 23969792 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 10
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba8c000/0x0/0x4ffc00000, data 0x10c6f88/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 23945216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996008 data_alloc: 218103808 data_used: 192512
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 23879680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 23617536 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 23306240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000508 data_alloc: 218103808 data_used: 192512
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 23371776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 23240704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 11
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.229097366s of 10.394592285s, submitted: 43
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10e64b7/0x11b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 23126016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 23117824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 23044096 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001568 data_alloc: 218103808 data_used: 192512
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba61000/0x0/0x4ffc00000, data 0x10f0ecd/0x11bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 23019520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba57000/0x0/0x4ffc00000, data 0x10fbde4/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 21725184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 20561920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba4b000/0x0/0x4ffc00000, data 0x110823a/0x11d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008874 data_alloc: 218103808 data_used: 200704
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.060367584s of 10.409746170s, submitted: 65
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x1116701/0x11e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007998 data_alloc: 218103808 data_used: 200704
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 20340736 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x11254f0/0x11f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 20250624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 20373504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 20299776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013872 data_alloc: 218103808 data_used: 208896
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 20226048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x113fb10/0x120e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.961433411s of 10.210658073s, submitted: 54
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010722 data_alloc: 218103808 data_used: 208896
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 20078592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 20054016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9fb000/0x0/0x4ffc00000, data 0x1156181/0x1223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 20045824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 19922944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9f6000/0x0/0x4ffc00000, data 0x115af0e/0x1228000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011714 data_alloc: 218103808 data_used: 208896
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa854000/0x0/0x4ffc00000, data 0x115cd59/0x122a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.710002899s of 10.840860367s, submitted: 28
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x1166919/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010124 data_alloc: 218103808 data_used: 208896
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011676 data_alloc: 218103808 data_used: 208896
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 17539072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.881333351s of 10.000229836s, submitted: 24
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1180b27/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017194 data_alloc: 218103808 data_used: 217088
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 16302080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa825000/0x0/0x4ffc00000, data 0x118a04b/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 16261120 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 16220160 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020498 data_alloc: 218103808 data_used: 217088
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa813000/0x0/0x4ffc00000, data 0x119b383/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 16097280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.363933563s of 10.000885010s, submitted: 68
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022282 data_alloc: 218103808 data_used: 217088
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7f7000/0x0/0x4ffc00000, data 0x11b5e3b/0x1286000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 16187392 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 16138240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027624 data_alloc: 218103808 data_used: 225280
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7e0000/0x0/0x4ffc00000, data 0x11ccd00/0x129d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 15966208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 14663680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.663500786s of 10.003384590s, submitted: 58
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7b5000/0x0/0x4ffc00000, data 0x11f6ec0/0x12c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030366 data_alloc: 218103808 data_used: 225280
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 14467072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa796000/0x0/0x4ffc00000, data 0x1216d90/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa783000/0x0/0x4ffc00000, data 0x122a36a/0x12fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 13795328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 13811712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052002 data_alloc: 218103808 data_used: 233472
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 13123584 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa71f000/0x0/0x4ffc00000, data 0x12889a8/0x135d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa708000/0x0/0x4ffc00000, data 0x12a03d6/0x1375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.410496712s of 10.063361168s, submitted: 153
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 12115968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049980 data_alloc: 218103808 data_used: 233472
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12025856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 11812864 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 10698752 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa6c4000/0x0/0x4ffc00000, data 0x12e34ad/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 11739136 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062908 data_alloc: 218103808 data_used: 241664
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 12328960 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa698000/0x0/0x4ffc00000, data 0x130b8fd/0x13e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.388790131s of 10.036962509s, submitted: 152
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 12066816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 11968512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067180 data_alloc: 218103808 data_used: 249856
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 11952128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa65e000/0x0/0x4ffc00000, data 0x1348b92/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 10993664 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa647000/0x0/0x4ffc00000, data 0x1360b19/0x1437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069652 data_alloc: 218103808 data_used: 258048
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 10616832 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.325051308s of 10.040717125s, submitted: 60
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075246 data_alloc: 218103808 data_used: 266240
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa629000/0x0/0x4ffc00000, data 0x137b131/0x1454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 10461184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079110 data_alloc: 218103808 data_used: 266240
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 10428416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 10395648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ed000/0x0/0x4ffc00000, data 0x13b6889/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.324265480s of 11.515779495s, submitted: 35
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077896 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 10321920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ee000/0x0/0x4ffc00000, data 0x13b67ee/0x1490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086088 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9232384 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa59e000/0x0/0x4ffc00000, data 0x1404cc1/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9199616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 9625600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093304 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.734145164s of 11.004765511s, submitted: 52
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa56b000/0x0/0x4ffc00000, data 0x1438f3c/0x1513000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa532000/0x0/0x4ffc00000, data 0x147084c/0x154b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095534 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 9150464 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x14983d4/0x1572000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095530 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.397135735s of 11.280517578s, submitted: 59
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7593984 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 7479296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x14d321d/0x15ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098050 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 7462912 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099392 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.184447289s of 12.378032684s, submitted: 30
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098094 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098462 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 12
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:46 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.734287262s of 10.033769608s, submitted: 19
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4cd000/0x0/0x4ffc00000, data 0x14d592e/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099814 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101084 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100090 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.359554291s of 11.549299240s, submitted: 18
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 7864320 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d5816/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.601215363s of 11.686765671s, submitted: 14
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100378 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101794 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d58b5/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.926100731s of 11.090178490s, submitted: 16
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101618 data_alloc: 218103808 data_used: 274432
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88449024 unmapped: 7979008 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106954 data_alloc: 218103808 data_used: 282624
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d742f/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x14d7530/0x15b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.746568680s of 10.998859406s, submitted: 51
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108006 data_alloc: 218103808 data_used: 282624
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115008 data_alloc: 218103808 data_used: 290816
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14daac0/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc543/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119046 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848780632s of 10.849118233s, submitted: 70
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc608/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119676 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc541/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119372 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.558925629s of 10.797169685s, submitted: 23
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120626 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 7766016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 ms_handle_reset con 0x55c27dd65800 session 0x55c27d401c20
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14dc44b/0x15bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 7143424 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121304 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 13
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.979153633s of 11.168646812s, submitted: 206
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120872 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc5e2/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122944 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc5ad/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.424748421s of 10.796654701s, submitted: 27
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 7069696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123366 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc5e1/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 6905856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa083000/0x0/0x4ffc00000, data 0x1508cc2/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130868 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 5865472 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90677248 unmapped: 5750784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 5332992 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa02d000/0x0/0x4ffc00000, data 0x155d2e5/0x163f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.726997375s of 10.987854004s, submitted: 60
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142070 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 4874240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 4759552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9ff1000/0x0/0x4ffc00000, data 0x159c2c8/0x167d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143868 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 3416064 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 3145728 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.212817192s of 10.548931122s, submitted: 84
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151538 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 3604480 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92831744 unmapped: 3596288 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 3563520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158170 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 2383872 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2220032 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9eb9000/0x0/0x4ffc00000, data 0x16d2996/0x17b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 2670592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93806592 unmapped: 2621440 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x1705e04/0x17e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154906 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.826278687s of 11.162016869s, submitted: 70
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e79000/0x0/0x4ffc00000, data 0x1714710/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93995008 unmapped: 2433024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e56000/0x0/0x4ffc00000, data 0x1736fbe/0x1817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2236416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2228224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e42000/0x0/0x4ffc00000, data 0x174b9d7/0x182c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 1179648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161486 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e15000/0x0/0x4ffc00000, data 0x17766f9/0x1858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 1916928 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x17933f1/0x1874000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 1867776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170682 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 2760704 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.836094856s of 10.913021088s, submitted: 68
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d8f000/0x0/0x4ffc00000, data 0x17fcccc/0x18de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95117312 unmapped: 2359296 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1433600 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172456 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d5a000/0x0/0x4ffc00000, data 0x1832e2f/0x1913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180160 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 1990656 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 2457600 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.346602440s of 10.624962807s, submitted: 67
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96133120 unmapped: 2392064 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cf2000/0x0/0x4ffc00000, data 0x189b875/0x197c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8915 writes, 34K keys, 8915 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 8915 writes, 2241 syncs, 3.98 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3166 writes, 10K keys, 3166 commit groups, 1.0 writes per commit group, ingest: 14.20 MB, 0.02 MB/s#012Interval WAL: 3166 writes, 1329 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179516 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 2260992 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cc2000/0x0/0x4ffc00000, data 0x18ca961/0x19ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 2039808 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27c775400
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.670079231s of 13.884990692s, submitted: 22
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177262 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.552337646s of 21.707801819s, submitted: 8
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176268 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177860 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.837540627s of 11.850649834s, submitted: 3
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179404 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177186 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178762 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.027555466s of 13.073743820s, submitted: 11
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179664 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.837194443s of 10.898418427s, submitted: 7
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 827392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 901120 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184054 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc750/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185166 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 1933312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.652610779s of 11.716604233s, submitted: 16
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185554 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 1867776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186584 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b3/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185590 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.021935463s of 12.413156509s, submitted: 105
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186668 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185978 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185128 data_alloc: 218103808 data_used: 299008
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.800412178s of 13.922379494s, submitted: 10
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189286 data_alloc: 218103808 data_used: 307200
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 843776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189462 data_alloc: 218103808 data_used: 307200
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.972100258s of 12.103911400s, submitted: 26
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192404 data_alloc: 218103808 data_used: 315392
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 14
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfaff/0x19c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc11/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 761856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193468 data_alloc: 218103808 data_used: 315392
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193644 data_alloc: 218103808 data_used: 315392
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.880873680s of 12.922811508s, submitted: 19
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18dfcd0/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198760 data_alloc: 218103808 data_used: 315392
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc35/0x19c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197700 data_alloc: 218103808 data_used: 323584
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.758671761s of 10.983880043s, submitted: 61
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197876 data_alloc: 218103808 data_used: 323584
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201698 data_alloc: 218103808 data_used: 331776
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.046489716s of 11.064700127s, submitted: 14
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202762 data_alloc: 218103808 data_used: 331776
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202586 data_alloc: 218103808 data_used: 331776
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.863492966s of 10.986426353s, submitted: 33
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 153 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18e4d7e/0x19ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206958 data_alloc: 218103808 data_used: 339968
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1810432 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 153 handle_osd_map epochs [154,155], i have 153, src has [1,155]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214634 data_alloc: 218103808 data_used: 352256
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e8559/0x19d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214978 data_alloc: 218103808 data_used: 352256
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.761722565s of 12.017519951s, submitted: 41
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 1761280 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 1744896 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 155 handle_osd_map epochs [156,157], i have 155, src has [1,157]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223380 data_alloc: 218103808 data_used: 360448
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c95000/0x0/0x4ffc00000, data 0x18ebca8/0x19d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220608 data_alloc: 218103808 data_used: 360448
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.814929008s of 10.932528496s, submitted: 43
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c98000/0x0/0x4ffc00000, data 0x18ebaf8/0x19d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224766 data_alloc: 218103808 data_used: 368640
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226534 data_alloc: 218103808 data_used: 368640
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.052343369s of 13.091160774s, submitted: 15
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225494 data_alloc: 218103808 data_used: 368640
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c94000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226380 data_alloc: 218103808 data_used: 368640
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228808 data_alloc: 218103808 data_used: 376832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.807965279s of 14.139179230s, submitted: 58
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229000 data_alloc: 218103808 data_used: 376832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 161 heartbeat osd_stat(store_statfs(0x4f9c8a000/0x0/0x4ffc00000, data 0x18f27c6/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236804 data_alloc: 218103808 data_used: 385024
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100155392 unmapped: 1515520 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239106 data_alloc: 218103808 data_used: 385024
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.630488396s of 11.893076897s, submitted: 66
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242096 data_alloc: 218103808 data_used: 385024
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 163 heartbeat osd_stat(store_statfs(0x4f9c85000/0x0/0x4ffc00000, data 0x18f5e0f/0x19e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245550 data_alloc: 218103808 data_used: 393216
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.243680000s of 11.435800552s, submitted: 63
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c81000/0x0/0x4ffc00000, data 0x18f7ac0/0x19ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249940 data_alloc: 218103808 data_used: 393216
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100229120 unmapped: 1441792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 166 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x18fb08e/0x19f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255022 data_alloc: 218103808 data_used: 393216
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 167 heartbeat osd_stat(store_statfs(0x4f9c79000/0x0/0x4ffc00000, data 0x18fcca4/0x19f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.215492249s of 12.445914268s, submitted: 69
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258316 data_alloc: 218103808 data_used: 401408
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258492 data_alloc: 218103808 data_used: 401408
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100286464 unmapped: 1384448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 169 ms_handle_reset con 0x55c27dd65000 session 0x55c27f3a21e0
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 1048576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 15
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261466 data_alloc: 218103808 data_used: 401408
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.873164177s of 11.996927261s, submitted: 252
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260762 data_alloc: 218103808 data_used: 401408
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9864000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.363555908s of 56.390811920s, submitted: 15
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 ms_handle_reset con 0x55c27dd65400 session 0x55c27c84cd20
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 16
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 638976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}'
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'config show' '{prefix=config show}'
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1933312 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2179072 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:01:47 np0005531754 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}'
Nov 22 01:01:47 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 01:01:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14571 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543863777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543863777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885789032' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 01:01:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14579 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 01:01:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937092397' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 01:01:47 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14583 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 01:01:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 01:01:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316180778' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 01:01:48 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14587 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 01:01:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:01:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 01:01:48 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/451420270' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 01:01:48 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:49 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14593 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 01:01:49 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:01:49 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:01:49.072+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:01:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 22 01:01:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1459366565' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 01:01:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 22 01:01:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366959193' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 01:01:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 22 01:01:49 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/763088824' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772189802' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109510212' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580993086' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 22 01:01:50 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158254947' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 01:01:50 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097651303' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682878500' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3013787005' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152734359' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 2932736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 2924544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843165 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 2924544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 2916352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.113706589s of 10.150311470s, submitted: 10
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 2916352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 2916352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 2908160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846608 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 2908160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 2891776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 2891776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 2891776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 2883584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846608 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 2883584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 2875392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 2875392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.896636963s of 10.933871269s, submitted: 8
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 2867200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 2859008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850049 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 2850816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 2850816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 2842624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 2842624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 2842624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850049 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 2834432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 2834432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 2826240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 2826240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 2818048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851196 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.920650482s of 11.937989235s, submitted: 4
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 2818048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 2818048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 2809856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 2809856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 2801664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852343 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 2801664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 2793472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 2793472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 2793472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 2785280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853490 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 2785280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.758671761s of 11.776473999s, submitted: 4
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 2777088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 2777088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 2768896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 2768896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 854638 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 2768896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 2752512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 2752512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 2744320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 2744320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856934 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 2744320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.980106354s of 10.004346848s, submitted: 6
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 2736128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 2736128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 2727936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 2727936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858082 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 2727936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860377 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 2711552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 2711552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.136055946s of 11.168321609s, submitted: 8
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 2703360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 2703360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 2695168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863819 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 2695168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 2695168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 2686976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 2686976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 2670592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867263 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 2670592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 2662400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 2654208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 2654208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 2646016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868411 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 2646016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 2646016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 2637824 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 2637824 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.133054733s of 16.182121277s, submitted: 12
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 2629632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 869559 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 2629632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 2613248 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 2613248 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 2605056 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 2605056 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871855 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 2596864 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 2580480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 2580480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 2572288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.748580933s of 10.789328575s, submitted: 10
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 2572288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875302 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 2564096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 2564096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 2564096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 2555904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 2555904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876451 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 2547712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 2539520 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 2531328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 2531328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.051478386s of 10.075274467s, submitted: 6
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878748 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 2514944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 2514944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879896 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 2506752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 2506752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 2498560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 2498560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 2490368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879896 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 2490368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.036729813s of 12.049832344s, submitted: 4
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 2482176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 2465792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 2465792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 2457600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881044 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.f deep-scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.f deep-scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 2457600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 2449408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 2449408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 2449408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 2441216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884490 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 2433024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 2433024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 2424832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 2424832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 2424832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885639 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.083676338s of 14.125527382s, submitted: 10
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 2416640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 2416640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 2408448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 2408448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 2408448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 2400256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 2400256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 2392064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 2392064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 2375680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 2375680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 2367488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 2359296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 2359296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 2351104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 2351104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 2351104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 2342912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 2342912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 2334720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 2334720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 2326528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 2326528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 2326528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 2318336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 2318336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 2318336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 2310144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 2310144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 2301952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 2301952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 2277376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 2277376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 2269184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 2269184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 2269184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 2260992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 2260992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 2252800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 2252800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 2244608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 2244608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 2244608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 2236416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 2236416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 2228224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 2220032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 2220032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 2211840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 2211840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 2203648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 2203648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 2195456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 2195456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 2195456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 2187264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 2187264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 2179072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 2179072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 2170880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:51 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 2170880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 2162688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 2154496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 2154496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 2146304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 2146304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 2146304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 2138112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 2138112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 2129920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 2129920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 2121728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 2121728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 2113536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 2113536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 2113536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 2105344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 2105344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 2105344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 2097152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 2097152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 2088960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 2088960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 2080768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 2080768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 2072576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 2072576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 2064384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 2064384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 2064384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 2056192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 2048000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 2039808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 2039808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 2039808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 2031616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 2031616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 2023424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 2023424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 2015232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 2015232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 2015232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 2007040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 2007040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1998848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1998848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1998848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1990656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1990656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1982464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1982464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1982464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1974272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1974272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1966080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1966080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1966080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 1957888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 1957888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1949696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1949696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1941504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1941504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1941504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 1933312 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 1933312 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1925120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1925120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1925120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 1916928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 1916928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 1908736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 1908736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 1908736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 1900544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 1900544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 1892352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 1884160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 1884160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 1875968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 1875968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 1875968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 1867776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 1867776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 1859584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 1859584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 1859584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 1851392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 1851392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 1843200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 1843200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 1826816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 1826816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 1818624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 1818624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 1810432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 1810432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 1802240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 1794048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 1794048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 1785856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 1785856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 1785856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 1777664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 1777664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 1761280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 1761280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 1744896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 1744896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 1736704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 1728512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 1728512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 1720320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 1720320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 1712128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 1712128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 1703936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 1703936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1687552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1687552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1662976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1662976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 1654784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 1654784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 1646592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 1646592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 1646592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 1638400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 1638400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 1630208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 1630208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 1630208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 1622016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.3 total, 600.0 interval
Cumulative writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 19.67 MB, 0.03 MB/s
Interval WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 1556480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1540096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1540096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 1515520 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 1515520 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 1507328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1490944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1490944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1490944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1482752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1482752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1474560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1474560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 1466368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 1466368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 1449984 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 1449984 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1441792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1441792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1425408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1425408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1417216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1417216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1400832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1392640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 1376256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 1376256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1368064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1368064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1351680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1351680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1343488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1343488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1310720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1310720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1302528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1302528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1286144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1286144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1286144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1277952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1277952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 315.175384521s of 315.207031250s, submitted: 8
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1236992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1187840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:01:52 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:03:04 np0005531754 podman[282879]: 2025-11-22 06:03:04.809733342 +0000 UTC m=+0.055689212 container create fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:03:04 np0005531754 systemd[1]: Started libpod-conmon-fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097.scope.
Nov 22 01:03:04 np0005531754 podman[282879]: 2025-11-22 06:03:04.782680838 +0000 UTC m=+0.028636748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:03:04 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:03:04 np0005531754 podman[282879]: 2025-11-22 06:03:04.910066268 +0000 UTC m=+0.156022178 container init fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 01:03:04 np0005531754 podman[282879]: 2025-11-22 06:03:04.917152987 +0000 UTC m=+0.163108817 container start fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:03:04 np0005531754 podman[282879]: 2025-11-22 06:03:04.920847725 +0000 UTC m=+0.166803595 container attach fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:03:04 np0005531754 fervent_ride[282895]: 167 167
Nov 22 01:03:04 np0005531754 systemd[1]: libpod-fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097.scope: Deactivated successfully.
Nov 22 01:03:04 np0005531754 podman[282879]: 2025-11-22 06:03:04.92513747 +0000 UTC m=+0.171093340 container died fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 01:03:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-c85bc458c14c0a3b3ed1ec4521d5c76920a64a0da8759266cff503d1e802d15a-merged.mount: Deactivated successfully.
Nov 22 01:03:04 np0005531754 podman[282879]: 2025-11-22 06:03:04.970678389 +0000 UTC m=+0.216634209 container remove fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:03:04 np0005531754 systemd[1]: libpod-conmon-fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097.scope: Deactivated successfully.
Nov 22 01:03:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:05 np0005531754 rsyslogd[1005]: imjournal: 18163 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 01:03:05 np0005531754 podman[282919]: 2025-11-22 06:03:05.151044097 +0000 UTC m=+0.057294725 container create b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 01:03:05 np0005531754 systemd[1]: Started libpod-conmon-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope.
Nov 22 01:03:05 np0005531754 podman[282919]: 2025-11-22 06:03:05.122877893 +0000 UTC m=+0.029128571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:03:05 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:03:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:03:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:03:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:03:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:03:05 np0005531754 podman[282919]: 2025-11-22 06:03:05.268064168 +0000 UTC m=+0.174314846 container init b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:03:05 np0005531754 podman[282919]: 2025-11-22 06:03:05.279781442 +0000 UTC m=+0.186032040 container start b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:03:05 np0005531754 podman[282919]: 2025-11-22 06:03:05.283021258 +0000 UTC m=+0.189271946 container attach b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:03:06 np0005531754 angry_ride[282936]: {
Nov 22 01:03:06 np0005531754 angry_ride[282936]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "osd_id": 1,
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "type": "bluestore"
Nov 22 01:03:06 np0005531754 angry_ride[282936]:    },
Nov 22 01:03:06 np0005531754 angry_ride[282936]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "osd_id": 2,
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "type": "bluestore"
Nov 22 01:03:06 np0005531754 angry_ride[282936]:    },
Nov 22 01:03:06 np0005531754 angry_ride[282936]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "osd_id": 0,
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:03:06 np0005531754 angry_ride[282936]:        "type": "bluestore"
Nov 22 01:03:06 np0005531754 angry_ride[282936]:    }
Nov 22 01:03:06 np0005531754 angry_ride[282936]: }
Nov 22 01:03:06 np0005531754 systemd[1]: libpod-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope: Deactivated successfully.
Nov 22 01:03:06 np0005531754 systemd[1]: libpod-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope: Consumed 1.097s CPU time.
Nov 22 01:03:06 np0005531754 conmon[282936]: conmon b3b9bc73b51b08e15a46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope/container/memory.events
Nov 22 01:03:06 np0005531754 podman[282919]: 2025-11-22 06:03:06.372798104 +0000 UTC m=+1.279048702 container died b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 01:03:06 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5-merged.mount: Deactivated successfully.
Nov 22 01:03:06 np0005531754 podman[282919]: 2025-11-22 06:03:06.434588708 +0000 UTC m=+1.340839306 container remove b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:03:06 np0005531754 systemd[1]: libpod-conmon-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope: Deactivated successfully.
Nov 22 01:03:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:03:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:03:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:03:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:03:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5ce001fc-73ec-43d7-bdd9-b0d92d548003 does not exist
Nov 22 01:03:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f619ebac-5ce0-4134-9420-217e9a53f055 does not exist
Nov 22 01:03:06 np0005531754 podman[282970]: 2025-11-22 06:03:06.5258191 +0000 UTC m=+0.114083815 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 01:03:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:03:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:03:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:03:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:03:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:03:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:03:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:03:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:03:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:16 np0005531754 podman[283055]: 2025-11-22 06:03:16.229633705 +0000 UTC m=+0.078049629 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 01:03:16 np0005531754 podman[283056]: 2025-11-22 06:03:16.249323822 +0000 UTC m=+0.103287115 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Nov 22 01:03:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.167 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.168 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.169 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:03:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:03:18 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3161789297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.680 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.889 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.891 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4954MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.892 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.892 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.957 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.957 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:03:18 np0005531754 nova_compute[255660]: 2025-11-22 06:03:18.972 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:03:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:03:19 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645355972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:03:19 np0005531754 nova_compute[255660]: 2025-11-22 06:03:19.462 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:03:19 np0005531754 nova_compute[255660]: 2025-11-22 06:03:19.468 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:03:19 np0005531754 nova_compute[255660]: 2025-11-22 06:03:19.484 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:03:19 np0005531754 nova_compute[255660]: 2025-11-22 06:03:19.487 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:03:19 np0005531754 nova_compute[255660]: 2025-11-22 06:03:19.487 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:03:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:25 np0005531754 nova_compute[255660]: 2025-11-22 06:03:25.483 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:26 np0005531754 nova_compute[255660]: 2025-11-22 06:03:26.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:26 np0005531754 nova_compute[255660]: 2025-11-22 06:03:26.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:03:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:27 np0005531754 nova_compute[255660]: 2025-11-22 06:03:27.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:28 np0005531754 nova_compute[255660]: 2025-11-22 06:03:28.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:29 np0005531754 nova_compute[255660]: 2025-11-22 06:03:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:31 np0005531754 nova_compute[255660]: 2025-11-22 06:03:31.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:31 np0005531754 nova_compute[255660]: 2025-11-22 06:03:31.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:32 np0005531754 nova_compute[255660]: 2025-11-22 06:03:32.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:35 np0005531754 nova_compute[255660]: 2025-11-22 06:03:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:03:35 np0005531754 nova_compute[255660]: 2025-11-22 06:03:35.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:03:35 np0005531754 nova_compute[255660]: 2025-11-22 06:03:35.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:03:35 np0005531754 nova_compute[255660]: 2025-11-22 06:03:35.145 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:03:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:03:36.944 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:03:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:03:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:03:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:03:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:03:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:37 np0005531754 podman[283138]: 2025-11-22 06:03:37.242499072 +0000 UTC m=+0.101852947 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 22 01:03:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:03:43
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'vms', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:03:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:03:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:03:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:47 np0005531754 podman[283166]: 2025-11-22 06:03:47.213730474 +0000 UTC m=+0.063584403 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 01:03:47 np0005531754 podman[283165]: 2025-11-22 06:03:47.238560839 +0000 UTC m=+0.089891727 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 22 01:03:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:03:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:03:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:03:57 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6557 writes, 30K keys, 6557 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6557 writes, 6557 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1710 writes, 8330 keys, 1710 commit groups, 1.0 writes per commit group, ingest: 10.39 MB, 0.02 MB/s#012Interval WAL: 1710 writes, 1710 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    105.0      0.31              0.13        16    0.019       0      0       0.0       0.0#012  L6      1/0    8.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    135.7    111.3      1.00              0.40        15    0.067     72K   8389       0.0       0.0#012 Sum      1/0    8.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    103.7    109.8      1.31              0.54        31    0.042     72K   8389       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    103.6    106.0      0.41              0.14         8    0.051     24K   2605       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    135.7    111.3      1.00              0.40        15    0.067     72K   8389       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    108.2      0.30              0.13        15    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.3 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 304.00 MB usage: 15.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00024 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1246,15.18 MB,4.9939%) FilterBlock(32,211.23 KB,0.0678564%) IndexBlock(32,389.08 KB,0.124987%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 01:03:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:03:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:03:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:04:07 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5c923816-e00b-415c-bb75-0c16c40f27f7 does not exist
Nov 22 01:04:07 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 45b6737c-0337-4b8f-860a-991a7b628c29 does not exist
Nov 22 01:04:07 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 62679f6e-2889-42d7-b125-ddd058eb63f3 does not exist
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:04:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:04:07 np0005531754 podman[283355]: 2025-11-22 06:04:07.900149783 +0000 UTC m=+0.107737675 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 01:04:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:04:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:04:08 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:04:08 np0005531754 podman[283496]: 2025-11-22 06:04:08.446122915 +0000 UTC m=+0.056234666 container create e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 01:04:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:08 np0005531754 systemd[1]: Started libpod-conmon-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope.
Nov 22 01:04:08 np0005531754 podman[283496]: 2025-11-22 06:04:08.419057561 +0000 UTC m=+0.029169312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:04:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:04:08 np0005531754 podman[283496]: 2025-11-22 06:04:08.562989882 +0000 UTC m=+0.173101623 container init e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 22 01:04:08 np0005531754 podman[283496]: 2025-11-22 06:04:08.575754824 +0000 UTC m=+0.185866535 container start e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:04:08 np0005531754 podman[283496]: 2025-11-22 06:04:08.580272406 +0000 UTC m=+0.190384157 container attach e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 01:04:08 np0005531754 lucid_cori[283512]: 167 167
Nov 22 01:04:08 np0005531754 systemd[1]: libpod-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope: Deactivated successfully.
Nov 22 01:04:08 np0005531754 conmon[283512]: conmon e9116e9a8eec64c96d11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope/container/memory.events
Nov 22 01:04:08 np0005531754 podman[283496]: 2025-11-22 06:04:08.586138122 +0000 UTC m=+0.196249833 container died e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 01:04:08 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0477f754f42a9163ceebe41f236b5b55d54a5eea7ed928e1e7be5075fac45179-merged.mount: Deactivated successfully.
Nov 22 01:04:08 np0005531754 podman[283496]: 2025-11-22 06:04:08.657716398 +0000 UTC m=+0.267828119 container remove e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 01:04:08 np0005531754 systemd[1]: libpod-conmon-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope: Deactivated successfully.
Nov 22 01:04:08 np0005531754 podman[283538]: 2025-11-22 06:04:08.892846901 +0000 UTC m=+0.056320628 container create a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:04:08 np0005531754 systemd[1]: Started libpod-conmon-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope.
Nov 22 01:04:08 np0005531754 podman[283538]: 2025-11-22 06:04:08.868214031 +0000 UTC m=+0.031687868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:04:08 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:04:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:08 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:08 np0005531754 podman[283538]: 2025-11-22 06:04:08.994116491 +0000 UTC m=+0.157590228 container init a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:04:09 np0005531754 podman[283538]: 2025-11-22 06:04:09.005103505 +0000 UTC m=+0.168577222 container start a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 01:04:09 np0005531754 podman[283538]: 2025-11-22 06:04:09.008440245 +0000 UTC m=+0.171913962 container attach a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:04:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:10 np0005531754 modest_lumiere[283555]: --> passed data devices: 0 physical, 3 LVM
Nov 22 01:04:10 np0005531754 modest_lumiere[283555]: --> relative data size: 1.0
Nov 22 01:04:10 np0005531754 modest_lumiere[283555]: --> All data devices are unavailable
Nov 22 01:04:10 np0005531754 systemd[1]: libpod-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope: Deactivated successfully.
Nov 22 01:04:10 np0005531754 podman[283538]: 2025-11-22 06:04:10.13706126 +0000 UTC m=+1.300535007 container died a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:04:10 np0005531754 systemd[1]: libpod-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope: Consumed 1.085s CPU time.
Nov 22 01:04:10 np0005531754 systemd[1]: var-lib-containers-storage-overlay-379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930-merged.mount: Deactivated successfully.
Nov 22 01:04:10 np0005531754 podman[283538]: 2025-11-22 06:04:10.2084232 +0000 UTC m=+1.371896947 container remove a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:04:10 np0005531754 systemd[1]: libpod-conmon-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope: Deactivated successfully.
Nov 22 01:04:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:11 np0005531754 podman[283739]: 2025-11-22 06:04:11.086445079 +0000 UTC m=+0.064125447 container create 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:04:11 np0005531754 systemd[1]: Started libpod-conmon-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope.
Nov 22 01:04:11 np0005531754 podman[283739]: 2025-11-22 06:04:11.062621152 +0000 UTC m=+0.040301600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:04:11 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:04:11 np0005531754 podman[283739]: 2025-11-22 06:04:11.184504473 +0000 UTC m=+0.162184861 container init 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:04:11 np0005531754 podman[283739]: 2025-11-22 06:04:11.195796126 +0000 UTC m=+0.173476494 container start 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 01:04:11 np0005531754 podman[283739]: 2025-11-22 06:04:11.199903636 +0000 UTC m=+0.177584034 container attach 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:04:11 np0005531754 sweet_hermann[283755]: 167 167
Nov 22 01:04:11 np0005531754 systemd[1]: libpod-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope: Deactivated successfully.
Nov 22 01:04:11 np0005531754 conmon[283755]: conmon 11a2df632166b094b6d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope/container/memory.events
Nov 22 01:04:11 np0005531754 podman[283739]: 2025-11-22 06:04:11.203234934 +0000 UTC m=+0.180915292 container died 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 01:04:11 np0005531754 systemd[1]: var-lib-containers-storage-overlay-2f2a6167f8789594a165953546ac4805187a64685a41c18b19cee3d7ef6fe302-merged.mount: Deactivated successfully.
Nov 22 01:04:11 np0005531754 podman[283739]: 2025-11-22 06:04:11.254306952 +0000 UTC m=+0.231987340 container remove 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:04:11 np0005531754 systemd[1]: libpod-conmon-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope: Deactivated successfully.
Nov 22 01:04:11 np0005531754 podman[283779]: 2025-11-22 06:04:11.529210089 +0000 UTC m=+0.091695636 container create e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 01:04:11 np0005531754 podman[283779]: 2025-11-22 06:04:11.471206116 +0000 UTC m=+0.033691723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:04:11 np0005531754 systemd[1]: Started libpod-conmon-e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180.scope.
Nov 22 01:04:11 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:04:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:11 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:11 np0005531754 podman[283779]: 2025-11-22 06:04:11.630819638 +0000 UTC m=+0.193305225 container init e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:04:11 np0005531754 podman[283779]: 2025-11-22 06:04:11.648810619 +0000 UTC m=+0.211296126 container start e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:04:11 np0005531754 podman[283779]: 2025-11-22 06:04:11.652504178 +0000 UTC m=+0.214989685 container attach e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]: {
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:    "0": [
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:        {
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "devices": [
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "/dev/loop3"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            ],
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_name": "ceph_lv0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_size": "21470642176",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "name": "ceph_lv0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "tags": {
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cluster_name": "ceph",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.crush_device_class": "",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.encrypted": "0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osd_id": "0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.type": "block",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.vdo": "0"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            },
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "type": "block",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "vg_name": "ceph_vg0"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:        }
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:    ],
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:    "1": [
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:        {
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "devices": [
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "/dev/loop4"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            ],
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_name": "ceph_lv1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_size": "21470642176",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "name": "ceph_lv1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "tags": {
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cluster_name": "ceph",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.crush_device_class": "",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.encrypted": "0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osd_id": "1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.type": "block",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.vdo": "0"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            },
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "type": "block",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "vg_name": "ceph_vg1"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:        }
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:    ],
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:    "2": [
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:        {
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "devices": [
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "/dev/loop5"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            ],
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_name": "ceph_lv2",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_size": "21470642176",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "name": "ceph_lv2",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "tags": {
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.cluster_name": "ceph",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.crush_device_class": "",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.encrypted": "0",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osd_id": "2",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.type": "block",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:                "ceph.vdo": "0"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            },
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "type": "block",
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:            "vg_name": "ceph_vg2"
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:        }
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]:    ]
Nov 22 01:04:12 np0005531754 blissful_pasteur[283796]: }
Nov 22 01:04:12 np0005531754 systemd[1]: libpod-e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180.scope: Deactivated successfully.
Nov 22 01:04:12 np0005531754 podman[283779]: 2025-11-22 06:04:12.443990641 +0000 UTC m=+1.006476148 container died e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:04:12 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954-merged.mount: Deactivated successfully.
Nov 22 01:04:12 np0005531754 podman[283779]: 2025-11-22 06:04:12.503388121 +0000 UTC m=+1.065873638 container remove e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:04:12 np0005531754 systemd[1]: libpod-conmon-e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180.scope: Deactivated successfully.
Nov 22 01:04:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:13 np0005531754 podman[283957]: 2025-11-22 06:04:13.291267747 +0000 UTC m=+0.066191752 container create d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 01:04:13 np0005531754 systemd[1]: Started libpod-conmon-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope.
Nov 22 01:04:13 np0005531754 podman[283957]: 2025-11-22 06:04:13.264527411 +0000 UTC m=+0.039451506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:04:13 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:04:13 np0005531754 podman[283957]: 2025-11-22 06:04:13.378820301 +0000 UTC m=+0.153744376 container init d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:04:13 np0005531754 podman[283957]: 2025-11-22 06:04:13.39078267 +0000 UTC m=+0.165706715 container start d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 01:04:13 np0005531754 podman[283957]: 2025-11-22 06:04:13.395073546 +0000 UTC m=+0.169997601 container attach d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:04:13 np0005531754 systemd[1]: libpod-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope: Deactivated successfully.
Nov 22 01:04:13 np0005531754 agitated_mahavira[283974]: 167 167
Nov 22 01:04:13 np0005531754 conmon[283974]: conmon d0ff49ac4730608a5738 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope/container/memory.events
Nov 22 01:04:13 np0005531754 podman[283957]: 2025-11-22 06:04:13.397618483 +0000 UTC m=+0.172542548 container died d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 01:04:13 np0005531754 systemd[1]: var-lib-containers-storage-overlay-40fa16042e57314477069bb1c1f1d002074b2bd30d0d043d09f2152043281f13-merged.mount: Deactivated successfully.
Nov 22 01:04:13 np0005531754 podman[283957]: 2025-11-22 06:04:13.447504219 +0000 UTC m=+0.222428254 container remove d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 01:04:13 np0005531754 systemd[1]: libpod-conmon-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope: Deactivated successfully.
Nov 22 01:04:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:13 np0005531754 podman[283996]: 2025-11-22 06:04:13.653145533 +0000 UTC m=+0.065481434 container create 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:04:13 np0005531754 systemd[1]: Started libpod-conmon-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope.
Nov 22 01:04:13 np0005531754 podman[283996]: 2025-11-22 06:04:13.625688027 +0000 UTC m=+0.038023968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:04:13 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:04:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:13 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:04:13 np0005531754 podman[283996]: 2025-11-22 06:04:13.772395624 +0000 UTC m=+0.184731575 container init 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 01:04:13 np0005531754 podman[283996]: 2025-11-22 06:04:13.783548922 +0000 UTC m=+0.195884813 container start 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:04:13 np0005531754 podman[283996]: 2025-11-22 06:04:13.792971015 +0000 UTC m=+0.205306906 container attach 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 22 01:04:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:04:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:04:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:04:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:04:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:04:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:04:14 np0005531754 eager_ellis[284012]: {
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "osd_id": 1,
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "type": "bluestore"
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:    },
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "osd_id": 2,
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "type": "bluestore"
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:    },
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "osd_id": 0,
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:        "type": "bluestore"
Nov 22 01:04:14 np0005531754 eager_ellis[284012]:    }
Nov 22 01:04:14 np0005531754 eager_ellis[284012]: }
Nov 22 01:04:14 np0005531754 systemd[1]: libpod-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope: Deactivated successfully.
Nov 22 01:04:14 np0005531754 podman[283996]: 2025-11-22 06:04:14.892007229 +0000 UTC m=+1.304343140 container died 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 01:04:14 np0005531754 systemd[1]: libpod-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope: Consumed 1.117s CPU time.
Nov 22 01:04:14 np0005531754 systemd[1]: var-lib-containers-storage-overlay-1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283-merged.mount: Deactivated successfully.
Nov 22 01:04:14 np0005531754 podman[283996]: 2025-11-22 06:04:14.947568805 +0000 UTC m=+1.359904656 container remove 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:04:14 np0005531754 systemd[1]: libpod-conmon-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope: Deactivated successfully.
Nov 22 01:04:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:04:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:04:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:04:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:04:15 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 23417d8c-0ca7-4982-8541-dfbe674a1eb4 does not exist
Nov 22 01:04:15 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 9794301b-9e76-467f-9578-7f3b5624a2ce does not exist
Nov 22 01:04:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:04:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:04:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:18 np0005531754 podman[284108]: 2025-11-22 06:04:18.225363723 +0000 UTC m=+0.076477837 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 22 01:04:18 np0005531754 podman[284107]: 2025-11-22 06:04:18.235835864 +0000 UTC m=+0.083769393 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 01:04:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.200 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.200 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.201 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.201 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.201 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:04:20 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:04:20 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599743546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.634 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.828 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.830 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4952MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.830 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.831 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.900 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.901 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:04:20 np0005531754 nova_compute[255660]: 2025-11-22 06:04:20.917 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:04:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:21 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:04:21 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895548595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:04:21 np0005531754 nova_compute[255660]: 2025-11-22 06:04:21.408 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:04:21 np0005531754 nova_compute[255660]: 2025-11-22 06:04:21.415 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:04:21 np0005531754 nova_compute[255660]: 2025-11-22 06:04:21.427 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:04:21 np0005531754 nova_compute[255660]: 2025-11-22 06:04:21.429 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:04:21 np0005531754 nova_compute[255660]: 2025-11-22 06:04:21.430 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:04:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:29 np0005531754 nova_compute[255660]: 2025-11-22 06:04:29.427 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:29 np0005531754 nova_compute[255660]: 2025-11-22 06:04:29.428 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:29 np0005531754 nova_compute[255660]: 2025-11-22 06:04:29.428 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:29 np0005531754 nova_compute[255660]: 2025-11-22 06:04:29.428 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:04:30 np0005531754 nova_compute[255660]: 2025-11-22 06:04:30.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:32 np0005531754 nova_compute[255660]: 2025-11-22 06:04:32.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:32 np0005531754 nova_compute[255660]: 2025-11-22 06:04:32.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:33 np0005531754 nova_compute[255660]: 2025-11-22 06:04:33.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:04:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:04:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:04:36.946 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:04:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:04:36.946 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:04:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:37 np0005531754 nova_compute[255660]: 2025-11-22 06:04:37.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:04:37 np0005531754 nova_compute[255660]: 2025-11-22 06:04:37.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:04:37 np0005531754 nova_compute[255660]: 2025-11-22 06:04:37.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:04:37 np0005531754 nova_compute[255660]: 2025-11-22 06:04:37.160 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:04:38 np0005531754 podman[284188]: 2025-11-22 06:04:38.259515742 +0000 UTC m=+0.120160707 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 01:04:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:04:43
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:04:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:04:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:04:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:04:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118759774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:04:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:04:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118759774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:04:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:49 np0005531754 podman[284215]: 2025-11-22 06:04:49.236098753 +0000 UTC m=+0.081333098 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 22 01:04:49 np0005531754 podman[284214]: 2025-11-22 06:04:49.236177635 +0000 UTC m=+0.094498991 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 01:04:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.126785) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493126829, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2408, "num_deletes": 510, "total_data_size": 3487732, "memory_usage": 3560592, "flush_reason": "Manual Compaction"}
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493153144, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3431555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28627, "largest_seqno": 31034, "table_properties": {"data_size": 3420894, "index_size": 6323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25700, "raw_average_key_size": 19, "raw_value_size": 3397164, "raw_average_value_size": 2623, "num_data_blocks": 279, "num_entries": 1295, "num_filter_entries": 1295, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791275, "oldest_key_time": 1763791275, "file_creation_time": 1763791493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 26440 microseconds, and 13944 cpu microseconds.
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.153220) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3431555 bytes OK
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.153248) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.155188) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.155208) EVENT_LOG_v1 {"time_micros": 1763791493155201, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.155231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3476390, prev total WAL file size 3476390, number of live WAL files 2.
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.156861) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3351KB)], [62(8237KB)]
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493156903, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11866657, "oldest_snapshot_seqno": -1}
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:04:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6097 keys, 10192949 bytes, temperature: kUnknown
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493237443, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10192949, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10149656, "index_size": 26927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 153763, "raw_average_key_size": 25, "raw_value_size": 10037910, "raw_average_value_size": 1646, "num_data_blocks": 1101, "num_entries": 6097, "num_filter_entries": 6097, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.237855) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10192949 bytes
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.240684) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 7133, records dropped: 1036 output_compression: NoCompression
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.240714) EVENT_LOG_v1 {"time_micros": 1763791493240700, "job": 34, "event": "compaction_finished", "compaction_time_micros": 80667, "compaction_time_cpu_micros": 20471, "output_level": 6, "num_output_files": 1, "total_output_size": 10192949, "num_input_records": 7133, "num_output_records": 6097, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493241870, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493244643, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.156768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:04:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:04:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:04:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:09 np0005531754 podman[284255]: 2025-11-22 06:05:09.254603914 +0000 UTC m=+0.112353238 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:05:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:05:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:05:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:05:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:05:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:05:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:05:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:16 np0005531754 nova_compute[255660]: 2025-11-22 06:05:16.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:05:16 np0005531754 nova_compute[255660]: 2025-11-22 06:05:16.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 01:05:16 np0005531754 nova_compute[255660]: 2025-11-22 06:05:16.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:05:16 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 477595a4-66f0-4d44-9a75-733de6f74c5a does not exist
Nov 22 01:05:16 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev b7df0aa4-7142-45a6-9890-5007a28f93f9 does not exist
Nov 22 01:05:16 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev b91d56ad-89ec-42f3-9fc5-6642b37343a7 does not exist
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:05:16 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:05:16 np0005531754 podman[284553]: 2025-11-22 06:05:16.945769906 +0000 UTC m=+0.073768794 container create 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 01:05:16 np0005531754 systemd[1]: Started libpod-conmon-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope.
Nov 22 01:05:17 np0005531754 podman[284553]: 2025-11-22 06:05:16.915323291 +0000 UTC m=+0.043322219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:05:17 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:05:17 np0005531754 podman[284553]: 2025-11-22 06:05:17.045538436 +0000 UTC m=+0.173537304 container init 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 01:05:17 np0005531754 podman[284553]: 2025-11-22 06:05:17.058082252 +0000 UTC m=+0.186081100 container start 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 01:05:17 np0005531754 podman[284553]: 2025-11-22 06:05:17.063159248 +0000 UTC m=+0.191158186 container attach 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 01:05:17 np0005531754 mystifying_joliot[284569]: 167 167
Nov 22 01:05:17 np0005531754 systemd[1]: libpod-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope: Deactivated successfully.
Nov 22 01:05:17 np0005531754 conmon[284569]: conmon 46782d17abf944dd53bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope/container/memory.events
Nov 22 01:05:17 np0005531754 podman[284553]: 2025-11-22 06:05:17.066509597 +0000 UTC m=+0.194508485 container died 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 01:05:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:17 np0005531754 systemd[1]: var-lib-containers-storage-overlay-60d64d5658c58d7d6830acdcf093e30ed2268d2b92df87409b3cbe5df661bd60-merged.mount: Deactivated successfully.
Nov 22 01:05:17 np0005531754 podman[284553]: 2025-11-22 06:05:17.120656157 +0000 UTC m=+0.248655015 container remove 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 01:05:17 np0005531754 systemd[1]: libpod-conmon-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope: Deactivated successfully.
Nov 22 01:05:17 np0005531754 podman[284593]: 2025-11-22 06:05:17.323610049 +0000 UTC m=+0.048087098 container create 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:05:17 np0005531754 systemd[1]: Started libpod-conmon-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope.
Nov 22 01:05:17 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:05:17 np0005531754 podman[284593]: 2025-11-22 06:05:17.303596163 +0000 UTC m=+0.028073232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:05:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:17 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:17 np0005531754 podman[284593]: 2025-11-22 06:05:17.419052213 +0000 UTC m=+0.143529342 container init 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:05:17 np0005531754 podman[284593]: 2025-11-22 06:05:17.425715651 +0000 UTC m=+0.150192690 container start 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 01:05:17 np0005531754 podman[284593]: 2025-11-22 06:05:17.429528244 +0000 UTC m=+0.154005313 container attach 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 01:05:18 np0005531754 exciting_cray[284609]: --> passed data devices: 0 physical, 3 LVM
Nov 22 01:05:18 np0005531754 exciting_cray[284609]: --> relative data size: 1.0
Nov 22 01:05:18 np0005531754 exciting_cray[284609]: --> All data devices are unavailable
Nov 22 01:05:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:18 np0005531754 systemd[1]: libpod-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope: Deactivated successfully.
Nov 22 01:05:18 np0005531754 systemd[1]: libpod-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope: Consumed 1.019s CPU time.
Nov 22 01:05:18 np0005531754 conmon[284609]: conmon 0668bc39968635d30291 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope/container/memory.events
Nov 22 01:05:18 np0005531754 podman[284593]: 2025-11-22 06:05:18.485550906 +0000 UTC m=+1.210027975 container died 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:05:18 np0005531754 systemd[1]: var-lib-containers-storage-overlay-899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c-merged.mount: Deactivated successfully.
Nov 22 01:05:18 np0005531754 podman[284593]: 2025-11-22 06:05:18.536545501 +0000 UTC m=+1.261022570 container remove 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:05:18 np0005531754 systemd[1]: libpod-conmon-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope: Deactivated successfully.
Nov 22 01:05:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:19 np0005531754 podman[284794]: 2025-11-22 06:05:19.264225216 +0000 UTC m=+0.050749989 container create 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 01:05:19 np0005531754 systemd[1]: Started libpod-conmon-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope.
Nov 22 01:05:19 np0005531754 podman[284794]: 2025-11-22 06:05:19.240626904 +0000 UTC m=+0.027151747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:05:19 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:05:19 np0005531754 podman[284794]: 2025-11-22 06:05:19.361232502 +0000 UTC m=+0.147757295 container init 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 01:05:19 np0005531754 podman[284794]: 2025-11-22 06:05:19.371658952 +0000 UTC m=+0.158183735 container start 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 01:05:19 np0005531754 elastic_mendel[284818]: 167 167
Nov 22 01:05:19 np0005531754 systemd[1]: libpod-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope: Deactivated successfully.
Nov 22 01:05:19 np0005531754 conmon[284818]: conmon 9752bcde6197a01a6e9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope/container/memory.events
Nov 22 01:05:19 np0005531754 podman[284794]: 2025-11-22 06:05:19.378417643 +0000 UTC m=+0.164942426 container attach 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 01:05:19 np0005531754 podman[284794]: 2025-11-22 06:05:19.379049519 +0000 UTC m=+0.165574302 container died 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 01:05:19 np0005531754 podman[284812]: 2025-11-22 06:05:19.389698775 +0000 UTC m=+0.068002042 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:05:19 np0005531754 podman[284810]: 2025-11-22 06:05:19.392957201 +0000 UTC m=+0.082732735 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 22 01:05:19 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9a100f25fd2857b653c30674d7db25ee16024e8e290b950826861ed4207a682c-merged.mount: Deactivated successfully.
Nov 22 01:05:19 np0005531754 podman[284794]: 2025-11-22 06:05:19.415662409 +0000 UTC m=+0.202187172 container remove 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:05:19 np0005531754 systemd[1]: libpod-conmon-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope: Deactivated successfully.
Nov 22 01:05:19 np0005531754 podman[284874]: 2025-11-22 06:05:19.60365156 +0000 UTC m=+0.056538694 container create 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 01:05:19 np0005531754 systemd[1]: Started libpod-conmon-062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422.scope.
Nov 22 01:05:19 np0005531754 podman[284874]: 2025-11-22 06:05:19.575133447 +0000 UTC m=+0.028020641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:05:19 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:05:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:19 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:19 np0005531754 podman[284874]: 2025-11-22 06:05:19.714325353 +0000 UTC m=+0.167212547 container init 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:05:19 np0005531754 podman[284874]: 2025-11-22 06:05:19.725035359 +0000 UTC m=+0.177922493 container start 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 01:05:19 np0005531754 podman[284874]: 2025-11-22 06:05:19.728916183 +0000 UTC m=+0.181803327 container attach 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]: {
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:    "0": [
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:        {
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "devices": [
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "/dev/loop3"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            ],
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_name": "ceph_lv0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_size": "21470642176",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "name": "ceph_lv0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "tags": {
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cluster_name": "ceph",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.crush_device_class": "",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.encrypted": "0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osd_id": "0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.type": "block",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.vdo": "0"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            },
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "type": "block",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "vg_name": "ceph_vg0"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:        }
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:    ],
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:    "1": [
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:        {
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "devices": [
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "/dev/loop4"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            ],
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_name": "ceph_lv1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_size": "21470642176",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "name": "ceph_lv1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "tags": {
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cluster_name": "ceph",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.crush_device_class": "",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.encrypted": "0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osd_id": "1",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.type": "block",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.vdo": "0"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            },
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "type": "block",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "vg_name": "ceph_vg1"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:        }
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:    ],
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:    "2": [
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:        {
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "devices": [
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "/dev/loop5"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            ],
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_name": "ceph_lv2",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_size": "21470642176",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "name": "ceph_lv2",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "tags": {
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:05:20 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:05:20 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.cluster_name": "ceph",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.crush_device_class": "",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.encrypted": "0",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osd_id": "2",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.type": "block",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:                "ceph.vdo": "0"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            },
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "type": "block",
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:            "vg_name": "ceph_vg2"
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:        }
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]:    ]
Nov 22 01:05:20 np0005531754 hopeful_blackburn[284892]: }
Nov 22 01:05:20 np0005531754 systemd[1]: libpod-062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422.scope: Deactivated successfully.
Nov 22 01:05:20 np0005531754 podman[284874]: 2025-11-22 06:05:20.515552336 +0000 UTC m=+0.968439480 container died 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 01:05:20 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11-merged.mount: Deactivated successfully.
Nov 22 01:05:20 np0005531754 podman[284874]: 2025-11-22 06:05:20.602824101 +0000 UTC m=+1.055711245 container remove 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:05:20 np0005531754 systemd[1]: libpod-conmon-062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422.scope: Deactivated successfully.
Nov 22 01:05:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:21 np0005531754 podman[285056]: 2025-11-22 06:05:21.389201425 +0000 UTC m=+0.035778564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:05:21 np0005531754 podman[285056]: 2025-11-22 06:05:21.539819528 +0000 UTC m=+0.186396667 container create 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:05:21 np0005531754 systemd[1]: Started libpod-conmon-437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d.scope.
Nov 22 01:05:21 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:05:21 np0005531754 podman[285056]: 2025-11-22 06:05:21.665543082 +0000 UTC m=+0.312120221 container init 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 01:05:21 np0005531754 podman[285056]: 2025-11-22 06:05:21.676063816 +0000 UTC m=+0.322640965 container start 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 01:05:21 np0005531754 podman[285056]: 2025-11-22 06:05:21.681029539 +0000 UTC m=+0.327606668 container attach 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 01:05:21 np0005531754 gifted_bhabha[285073]: 167 167
Nov 22 01:05:21 np0005531754 systemd[1]: libpod-437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d.scope: Deactivated successfully.
Nov 22 01:05:21 np0005531754 podman[285056]: 2025-11-22 06:05:21.684699078 +0000 UTC m=+0.331276247 container died 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 01:05:21 np0005531754 systemd[1]: var-lib-containers-storage-overlay-23ccf179ed92f886c22a143c406d7b9bd7fd6c082825cf3b6d8f92a3c1447efa-merged.mount: Deactivated successfully.
Nov 22 01:05:21 np0005531754 podman[285056]: 2025-11-22 06:05:21.728720532 +0000 UTC m=+0.375297651 container remove 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:05:21 np0005531754 systemd[1]: libpod-conmon-437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d.scope: Deactivated successfully.
Nov 22 01:05:21 np0005531754 podman[285097]: 2025-11-22 06:05:21.929599999 +0000 UTC m=+0.051609230 container create 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:05:21 np0005531754 systemd[1]: Started libpod-conmon-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope.
Nov 22 01:05:21 np0005531754 podman[285097]: 2025-11-22 06:05:21.9069567 +0000 UTC m=+0.028965941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:05:21 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:05:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:22 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:05:22 np0005531754 podman[285097]: 2025-11-22 06:05:22.017550717 +0000 UTC m=+0.139559978 container init 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:05:22 np0005531754 podman[285097]: 2025-11-22 06:05:22.031181223 +0000 UTC m=+0.153190444 container start 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:05:22 np0005531754 podman[285097]: 2025-11-22 06:05:22.035074588 +0000 UTC m=+0.157083849 container attach 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.148 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.172 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.173 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.173 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.174 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.174 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:05:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:05:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90430756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.641 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.867 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.869 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4881MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.870 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.870 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.945 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.946 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:05:22 np0005531754 nova_compute[255660]: 2025-11-22 06:05:22.969 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]: {
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "osd_id": 1,
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "type": "bluestore"
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:    },
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "osd_id": 2,
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "type": "bluestore"
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:    },
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "osd_id": 0,
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:        "type": "bluestore"
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]:    }
Nov 22 01:05:23 np0005531754 affectionate_buck[285113]: }
Nov 22 01:05:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:23 np0005531754 systemd[1]: libpod-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope: Deactivated successfully.
Nov 22 01:05:23 np0005531754 systemd[1]: libpod-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope: Consumed 1.015s CPU time.
Nov 22 01:05:23 np0005531754 podman[285097]: 2025-11-22 06:05:23.082281642 +0000 UTC m=+1.204290923 container died 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 01:05:23 np0005531754 systemd[1]: var-lib-containers-storage-overlay-44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9-merged.mount: Deactivated successfully.
Nov 22 01:05:23 np0005531754 podman[285097]: 2025-11-22 06:05:23.162025619 +0000 UTC m=+1.284034870 container remove 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 01:05:23 np0005531754 systemd[1]: libpod-conmon-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope: Deactivated successfully.
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:05:23 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 8d06012a-5e32-449e-bda6-89961324dc68 does not exist
Nov 22 01:05:23 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 9df89602-923c-406e-a568-d05b605db1cd does not exist
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108723370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:05:23 np0005531754 nova_compute[255660]: 2025-11-22 06:05:23.400 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:05:23 np0005531754 nova_compute[255660]: 2025-11-22 06:05:23.410 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:05:23 np0005531754 nova_compute[255660]: 2025-11-22 06:05:23.428 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:05:23 np0005531754 nova_compute[255660]: 2025-11-22 06:05:23.432 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:05:23 np0005531754 nova_compute[255660]: 2025-11-22 06:05:23.433 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:05:23 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:05:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:29 np0005531754 nova_compute[255660]: 2025-11-22 06:05:29.411 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:29 np0005531754 nova_compute[255660]: 2025-11-22 06:05:29.690 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:30 np0005531754 nova_compute[255660]: 2025-11-22 06:05:30.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:30 np0005531754 nova_compute[255660]: 2025-11-22 06:05:30.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:30 np0005531754 nova_compute[255660]: 2025-11-22 06:05:30.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:05:30 np0005531754 nova_compute[255660]: 2025-11-22 06:05:30.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:32 np0005531754 nova_compute[255660]: 2025-11-22 06:05:32.288 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:32 np0005531754 nova_compute[255660]: 2025-11-22 06:05:32.289 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:33 np0005531754 nova_compute[255660]: 2025-11-22 06:05:33.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:33 np0005531754 nova_compute[255660]: 2025-11-22 06:05:33.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 01:05:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:34 np0005531754 nova_compute[255660]: 2025-11-22 06:05:34.218 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:34 np0005531754 nova_compute[255660]: 2025-11-22 06:05:34.417 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:35 np0005531754 nova_compute[255660]: 2025-11-22 06:05:35.301 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:05:36.947 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:05:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:05:36.947 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:05:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:05:36.947 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:05:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:37 np0005531754 nova_compute[255660]: 2025-11-22 06:05:37.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:05:37 np0005531754 nova_compute[255660]: 2025-11-22 06:05:37.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:05:37 np0005531754 nova_compute[255660]: 2025-11-22 06:05:37.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:05:37 np0005531754 nova_compute[255660]: 2025-11-22 06:05:37.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:05:37 np0005531754 nova_compute[255660]: 2025-11-22 06:05:37.860 255664 DEBUG oslo_concurrency.processutils [None req-f24b0f1e-644b-420a-9281-37ac9adaadd4 044d47622a784618a29823cd785e2e31 5830132eb4c840bd906214f1719ec76f - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:05:37 np0005531754 nova_compute[255660]: 2025-11-22 06:05:37.879 255664 DEBUG oslo_concurrency.processutils [None req-f24b0f1e-644b-420a-9281-37ac9adaadd4 044d47622a784618a29823cd785e2e31 5830132eb4c840bd906214f1719ec76f - - default default] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:05:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:40 np0005531754 podman[285256]: 2025-11-22 06:05:40.240256329 +0000 UTC m=+0.099593562 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 01:05:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:05:42 np0005531754 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 9147 writes, 34K keys, 9147 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9147 writes, 2199 syncs, 4.16 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1822 writes, 5236 keys, 1822 commit groups, 1.0 writes per commit group, ingest: 6.79 MB, 0.01 MB/s#012Interval WAL: 1822 writes, 656 syncs, 2.78 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:05:43
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:05:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:05:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:05:44 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:05:44.671 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 01:05:44 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:05:44.672 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 01:05:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:45 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:05:45.674 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 01:05:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:05:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260931621' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:05:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:05:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260931621' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:05:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:05:47 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 3213 syncs, 3.78 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3014 writes, 9902 keys, 3014 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s#012Interval WAL: 3014 writes, 1129 syncs, 2.67 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 01:05:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:50 np0005531754 podman[285282]: 2025-11-22 06:05:50.230968006 +0000 UTC m=+0.077647491 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 01:05:50 np0005531754 podman[285283]: 2025-11-22 06:05:50.258660092 +0000 UTC m=+0.098062590 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 01:05:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:05:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:05:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:05:53 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2769 syncs, 3.81 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1633 writes, 3817 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 2.01 MB, 0.00 MB/s#012Interval WAL: 1633 writes, 528 syncs, 3.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 01:05:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:56 np0005531754 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 01:05:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:05:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:05:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:11 np0005531754 podman[285319]: 2025-11-22 06:06:11.304327131 +0000 UTC m=+0.150672647 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 01:06:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:06:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:06:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:06:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:06:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:06:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:06:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:21 np0005531754 podman[285345]: 2025-11-22 06:06:21.212053496 +0000 UTC m=+0.069054110 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 01:06:21 np0005531754 podman[285346]: 2025-11-22 06:06:21.245797244 +0000 UTC m=+0.087879827 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.169 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.170 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.170 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.170 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.171 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:06:22 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:06:22 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2999093216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.647 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.846 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.847 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4957MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.848 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:06:22 np0005531754 nova_compute[255660]: 2025-11-22 06:06:22.848 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.057 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.058 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:06:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.168 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.296 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.297 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.316 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.351 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.374 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:06:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:06:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836846607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.849 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.856 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.880 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.885 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:06:23 np0005531754 nova_compute[255660]: 2025-11-22 06:06:23.886 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:06:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:06:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:06:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:24 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:24 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev cab87328-7ee6-4d4b-a410-a01ea3947ae5 does not exist
Nov 22 01:06:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 81bd2fab-d606-46bc-ae8e-5e278710b98d does not exist
Nov 22 01:06:25 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev e4d9c633-e346-4c1c-a2c3-edbb778a6a1c does not exist
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:25 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:06:26 np0005531754 podman[285817]: 2025-11-22 06:06:26.006200935 +0000 UTC m=+0.072443592 container create 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:06:26 np0005531754 systemd[1]: Started libpod-conmon-8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09.scope.
Nov 22 01:06:26 np0005531754 podman[285817]: 2025-11-22 06:06:25.978819917 +0000 UTC m=+0.045062674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:06:26 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:06:26 np0005531754 podman[285817]: 2025-11-22 06:06:26.103050851 +0000 UTC m=+0.169293598 container init 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 01:06:26 np0005531754 podman[285817]: 2025-11-22 06:06:26.114522769 +0000 UTC m=+0.180765466 container start 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:06:26 np0005531754 podman[285817]: 2025-11-22 06:06:26.11864466 +0000 UTC m=+0.184887417 container attach 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 01:06:26 np0005531754 nostalgic_blackwell[285834]: 167 167
Nov 22 01:06:26 np0005531754 systemd[1]: libpod-8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09.scope: Deactivated successfully.
Nov 22 01:06:26 np0005531754 podman[285817]: 2025-11-22 06:06:26.122732771 +0000 UTC m=+0.188975438 container died 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 01:06:26 np0005531754 systemd[1]: var-lib-containers-storage-overlay-bc73c0129d73db71efa9c3cd3cf319d09c649a26ffea31ca531fc1c17c496a42-merged.mount: Deactivated successfully.
Nov 22 01:06:26 np0005531754 podman[285817]: 2025-11-22 06:06:26.174442942 +0000 UTC m=+0.240685629 container remove 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 01:06:26 np0005531754 systemd[1]: libpod-conmon-8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09.scope: Deactivated successfully.
Nov 22 01:06:26 np0005531754 podman[285858]: 2025-11-22 06:06:26.393156599 +0000 UTC m=+0.073984972 container create 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 01:06:26 np0005531754 systemd[1]: Started libpod-conmon-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope.
Nov 22 01:06:26 np0005531754 podman[285858]: 2025-11-22 06:06:26.367407756 +0000 UTC m=+0.048236199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:06:26 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:06:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:26 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:26 np0005531754 podman[285858]: 2025-11-22 06:06:26.500724014 +0000 UTC m=+0.181552437 container init 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:06:26 np0005531754 podman[285858]: 2025-11-22 06:06:26.514435503 +0000 UTC m=+0.195263876 container start 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 01:06:26 np0005531754 podman[285858]: 2025-11-22 06:06:26.522811949 +0000 UTC m=+0.203640382 container attach 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:06:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:27 np0005531754 festive_bell[285874]: --> passed data devices: 0 physical, 3 LVM
Nov 22 01:06:27 np0005531754 festive_bell[285874]: --> relative data size: 1.0
Nov 22 01:06:27 np0005531754 festive_bell[285874]: --> All data devices are unavailable
Nov 22 01:06:27 np0005531754 systemd[1]: libpod-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope: Deactivated successfully.
Nov 22 01:06:27 np0005531754 podman[285858]: 2025-11-22 06:06:27.626263307 +0000 UTC m=+1.307091670 container died 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 01:06:27 np0005531754 systemd[1]: libpod-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope: Consumed 1.066s CPU time.
Nov 22 01:06:27 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c-merged.mount: Deactivated successfully.
Nov 22 01:06:27 np0005531754 podman[285858]: 2025-11-22 06:06:27.691332858 +0000 UTC m=+1.372161201 container remove 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 01:06:27 np0005531754 systemd[1]: libpod-conmon-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope: Deactivated successfully.
Nov 22 01:06:28 np0005531754 podman[286059]: 2025-11-22 06:06:28.398595364 +0000 UTC m=+0.045963179 container create ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:06:28 np0005531754 systemd[1]: Started libpod-conmon-ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2.scope.
Nov 22 01:06:28 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:06:28 np0005531754 podman[286059]: 2025-11-22 06:06:28.380140476 +0000 UTC m=+0.027508301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:06:28 np0005531754 podman[286059]: 2025-11-22 06:06:28.48169199 +0000 UTC m=+0.129059795 container init ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 01:06:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:28 np0005531754 podman[286059]: 2025-11-22 06:06:28.492171982 +0000 UTC m=+0.139539797 container start ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 01:06:28 np0005531754 podman[286059]: 2025-11-22 06:06:28.495482551 +0000 UTC m=+0.142850356 container attach ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 01:06:28 np0005531754 systemd[1]: libpod-ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2.scope: Deactivated successfully.
Nov 22 01:06:28 np0005531754 objective_leakey[286075]: 167 167
Nov 22 01:06:28 np0005531754 podman[286059]: 2025-11-22 06:06:28.499015946 +0000 UTC m=+0.146383761 container died ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:06:28 np0005531754 systemd[1]: var-lib-containers-storage-overlay-761290aa060ba6a09612d6c375b5f465e203076c298d8c7da7b2ad47552b4248-merged.mount: Deactivated successfully.
Nov 22 01:06:28 np0005531754 podman[286059]: 2025-11-22 06:06:28.555831285 +0000 UTC m=+0.203199130 container remove ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:06:28 np0005531754 systemd[1]: libpod-conmon-ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2.scope: Deactivated successfully.
Nov 22 01:06:28 np0005531754 podman[286098]: 2025-11-22 06:06:28.774329876 +0000 UTC m=+0.067369914 container create ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 01:06:28 np0005531754 systemd[1]: Started libpod-conmon-ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e.scope.
Nov 22 01:06:28 np0005531754 podman[286098]: 2025-11-22 06:06:28.746555118 +0000 UTC m=+0.039595216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:06:28 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:06:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:28 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:28 np0005531754 podman[286098]: 2025-11-22 06:06:28.872108068 +0000 UTC m=+0.165148146 container init ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 01:06:28 np0005531754 podman[286098]: 2025-11-22 06:06:28.883442482 +0000 UTC m=+0.176482530 container start ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:06:28 np0005531754 podman[286098]: 2025-11-22 06:06:28.88890852 +0000 UTC m=+0.181948558 container attach ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 22 01:06:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]: {
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:    "0": [
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:        {
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "devices": [
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "/dev/loop3"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            ],
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_name": "ceph_lv0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_size": "21470642176",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "name": "ceph_lv0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "tags": {
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cluster_name": "ceph",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.crush_device_class": "",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.encrypted": "0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osd_id": "0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.type": "block",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.vdo": "0"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            },
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "type": "block",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "vg_name": "ceph_vg0"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:        }
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:    ],
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:    "1": [
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:        {
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "devices": [
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "/dev/loop4"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            ],
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_name": "ceph_lv1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_size": "21470642176",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "name": "ceph_lv1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "tags": {
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cluster_name": "ceph",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.crush_device_class": "",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.encrypted": "0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osd_id": "1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.type": "block",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.vdo": "0"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            },
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "type": "block",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "vg_name": "ceph_vg1"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:        }
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:    ],
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:    "2": [
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:        {
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "devices": [
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "/dev/loop5"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            ],
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_name": "ceph_lv2",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_size": "21470642176",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "name": "ceph_lv2",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "tags": {
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.cluster_name": "ceph",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.crush_device_class": "",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.encrypted": "0",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osd_id": "2",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.type": "block",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:                "ceph.vdo": "0"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            },
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "type": "block",
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:            "vg_name": "ceph_vg2"
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:        }
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]:    ]
Nov 22 01:06:29 np0005531754 elastic_hellman[286114]: }
Nov 22 01:06:29 np0005531754 systemd[1]: libpod-ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e.scope: Deactivated successfully.
Nov 22 01:06:29 np0005531754 podman[286098]: 2025-11-22 06:06:29.693404771 +0000 UTC m=+0.986444789 container died ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:06:29 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179-merged.mount: Deactivated successfully.
Nov 22 01:06:29 np0005531754 podman[286098]: 2025-11-22 06:06:29.78101582 +0000 UTC m=+1.074055838 container remove ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 01:06:29 np0005531754 systemd[1]: libpod-conmon-ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e.scope: Deactivated successfully.
Nov 22 01:06:29 np0005531754 nova_compute[255660]: 2025-11-22 06:06:29.887 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:30 np0005531754 podman[286275]: 2025-11-22 06:06:30.600462314 +0000 UTC m=+0.052411961 container create 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 01:06:30 np0005531754 systemd[1]: Started libpod-conmon-2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd.scope.
Nov 22 01:06:30 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:06:30 np0005531754 podman[286275]: 2025-11-22 06:06:30.572622845 +0000 UTC m=+0.024572552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:06:30 np0005531754 podman[286275]: 2025-11-22 06:06:30.683339924 +0000 UTC m=+0.135289561 container init 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:06:30 np0005531754 podman[286275]: 2025-11-22 06:06:30.693532599 +0000 UTC m=+0.145482216 container start 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 01:06:30 np0005531754 podman[286275]: 2025-11-22 06:06:30.697213028 +0000 UTC m=+0.149162735 container attach 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 01:06:30 np0005531754 sleepy_nash[286291]: 167 167
Nov 22 01:06:30 np0005531754 systemd[1]: libpod-2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd.scope: Deactivated successfully.
Nov 22 01:06:30 np0005531754 podman[286275]: 2025-11-22 06:06:30.700548058 +0000 UTC m=+0.152497685 container died 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:06:30 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0191847ed57a93b7dd3737107650b20c1d241cd6f07fd8c2bc3ca5cec98e0a37-merged.mount: Deactivated successfully.
Nov 22 01:06:30 np0005531754 podman[286275]: 2025-11-22 06:06:30.74001272 +0000 UTC m=+0.191962377 container remove 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 01:06:30 np0005531754 systemd[1]: libpod-conmon-2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd.scope: Deactivated successfully.
Nov 22 01:06:30 np0005531754 podman[286314]: 2025-11-22 06:06:30.961171912 +0000 UTC m=+0.061400473 container create 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:06:31 np0005531754 systemd[1]: Started libpod-conmon-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope.
Nov 22 01:06:31 np0005531754 podman[286314]: 2025-11-22 06:06:30.936750415 +0000 UTC m=+0.036979056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:06:31 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:06:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:31 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:06:31 np0005531754 podman[286314]: 2025-11-22 06:06:31.057788523 +0000 UTC m=+0.158017134 container init 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:06:31 np0005531754 podman[286314]: 2025-11-22 06:06:31.072953361 +0000 UTC m=+0.173181942 container start 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 01:06:31 np0005531754 podman[286314]: 2025-11-22 06:06:31.077681958 +0000 UTC m=+0.177910549 container attach 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:06:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:31 np0005531754 nova_compute[255660]: 2025-11-22 06:06:31.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:31 np0005531754 nova_compute[255660]: 2025-11-22 06:06:31.127 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:31 np0005531754 nova_compute[255660]: 2025-11-22 06:06:31.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:06:32 np0005531754 nova_compute[255660]: 2025-11-22 06:06:32.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]: {
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "osd_id": 1,
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "type": "bluestore"
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:    },
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "osd_id": 2,
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "type": "bluestore"
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:    },
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "osd_id": 0,
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:        "type": "bluestore"
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]:    }
Nov 22 01:06:32 np0005531754 agitated_hodgkin[286331]: }
Nov 22 01:06:32 np0005531754 systemd[1]: libpod-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope: Deactivated successfully.
Nov 22 01:06:32 np0005531754 systemd[1]: libpod-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope: Consumed 1.150s CPU time.
Nov 22 01:06:32 np0005531754 podman[286364]: 2025-11-22 06:06:32.252807615 +0000 UTC m=+0.027879802 container died 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:06:32 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8-merged.mount: Deactivated successfully.
Nov 22 01:06:32 np0005531754 podman[286364]: 2025-11-22 06:06:32.31280045 +0000 UTC m=+0.087872647 container remove 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 01:06:32 np0005531754 systemd[1]: libpod-conmon-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope: Deactivated successfully.
Nov 22 01:06:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:06:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:32 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:06:32 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:32 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c3ce0438-ed29-4266-ae65-a56a28c5bac2 does not exist
Nov 22 01:06:32 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 3dc6c435-28e8-462c-9f94-79e4b8a37618 does not exist
Nov 22 01:06:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:33 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:06:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:34 np0005531754 nova_compute[255660]: 2025-11-22 06:06:34.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:35 np0005531754 nova_compute[255660]: 2025-11-22 06:06:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:35 np0005531754 nova_compute[255660]: 2025-11-22 06:06:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:06:36.948 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:06:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:06:36.949 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:06:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:06:36.949 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:06:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:39 np0005531754 nova_compute[255660]: 2025-11-22 06:06:39.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:06:39 np0005531754 nova_compute[255660]: 2025-11-22 06:06:39.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:06:39 np0005531754 nova_compute[255660]: 2025-11-22 06:06:39.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:06:39 np0005531754 nova_compute[255660]: 2025-11-22 06:06:39.151 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:06:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:42 np0005531754 podman[286429]: 2025-11-22 06:06:42.29393287 +0000 UTC m=+0.134678595 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:06:43
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta']
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:06:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:06:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:06:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:06:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44305888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:06:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:06:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44305888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:06:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:52 np0005531754 podman[286456]: 2025-11-22 06:06:52.376877081 +0000 UTC m=+0.083485607 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 01:06:52 np0005531754 podman[286457]: 2025-11-22 06:06:52.418194393 +0000 UTC m=+0.117072472 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:06:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:06:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:06:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:06:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.501127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623501191, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1286, "num_deletes": 250, "total_data_size": 1982824, "memory_usage": 2016104, "flush_reason": "Manual Compaction"}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623515276, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1166888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31035, "largest_seqno": 32320, "table_properties": {"data_size": 1162319, "index_size": 2029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12080, "raw_average_key_size": 20, "raw_value_size": 1152293, "raw_average_value_size": 1969, "num_data_blocks": 93, "num_entries": 585, "num_filter_entries": 585, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791494, "oldest_key_time": 1763791494, "file_creation_time": 1763791623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14208 microseconds, and 7655 cpu microseconds.
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.515339) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1166888 bytes OK
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.515365) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.517217) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.517238) EVENT_LOG_v1 {"time_micros": 1763791623517231, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.517259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1977052, prev total WAL file size 1977052, number of live WAL files 2.
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.518330) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1139KB)], [65(9954KB)]
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623518369, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11359837, "oldest_snapshot_seqno": -1}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6229 keys, 8835159 bytes, temperature: kUnknown
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623580162, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8835159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8794059, "index_size": 24414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 156630, "raw_average_key_size": 25, "raw_value_size": 8682984, "raw_average_value_size": 1393, "num_data_blocks": 1001, "num_entries": 6229, "num_filter_entries": 6229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.580458) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8835159 bytes
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.581917) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.6 rd, 142.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.7 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(17.3) write-amplify(7.6) OK, records in: 6682, records dropped: 453 output_compression: NoCompression
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.581946) EVENT_LOG_v1 {"time_micros": 1763791623581932, "job": 36, "event": "compaction_finished", "compaction_time_micros": 61882, "compaction_time_cpu_micros": 39999, "output_level": 6, "num_output_files": 1, "total_output_size": 8835159, "num_input_records": 6682, "num_output_records": 6229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623582430, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623585756, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.518275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:07:03 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:07:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:13 np0005531754 podman[286497]: 2025-11-22 06:07:13.270755956 +0000 UTC m=+0.125829938 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 01:07:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:07:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:07:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:07:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:07:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:07:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:07:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 22 01:07:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Nov 22 01:07:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.167 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:07:23 np0005531754 podman[286524]: 2025-11-22 06:07:23.198451229 +0000 UTC m=+0.053744717 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 01:07:23 np0005531754 podman[286525]: 2025-11-22 06:07:23.243571775 +0000 UTC m=+0.084683761 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 22 01:07:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:07:23 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3293739466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.572 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.746 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.747 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4980MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.748 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.748 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.832 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.833 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:07:23 np0005531754 nova_compute[255660]: 2025-11-22 06:07:23.854 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:07:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:07:24 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942960877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:07:24 np0005531754 nova_compute[255660]: 2025-11-22 06:07:24.294 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:07:24 np0005531754 nova_compute[255660]: 2025-11-22 06:07:24.300 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:07:24 np0005531754 nova_compute[255660]: 2025-11-22 06:07:24.337 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:07:24 np0005531754 nova_compute[255660]: 2025-11-22 06:07:24.340 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:07:24 np0005531754 nova_compute[255660]: 2025-11-22 06:07:24.341 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:07:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 01:07:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 01:07:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Nov 22 01:07:29 np0005531754 nova_compute[255660]: 2025-11-22 06:07:29.342 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 01:07:32 np0005531754 nova_compute[255660]: 2025-11-22 06:07:32.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:32 np0005531754 nova_compute[255660]: 2025-11-22 06:07:32.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:33 np0005531754 nova_compute[255660]: 2025-11-22 06:07:33.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:33 np0005531754 nova_compute[255660]: 2025-11-22 06:07:33.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:07:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:07:33 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 64058299-def9-4639-a731-5a572996b6d9 does not exist
Nov 22 01:07:33 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 8981e949-e5ca-45db-add6-080fb7bc4736 does not exist
Nov 22 01:07:33 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev a4211a8a-df16-4701-a55d-a0b0104813dd does not exist
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:07:33 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:07:34 np0005531754 nova_compute[255660]: 2025-11-22 06:07:34.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:34 np0005531754 nova_compute[255660]: 2025-11-22 06:07:34.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:34 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 01:07:34 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:07:34 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:07:34 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:07:34 np0005531754 podman[286877]: 2025-11-22 06:07:34.449369606 +0000 UTC m=+0.105555312 container create 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 01:07:34 np0005531754 podman[286877]: 2025-11-22 06:07:34.384936712 +0000 UTC m=+0.041122468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:07:34 np0005531754 systemd[1]: Started libpod-conmon-7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88.scope.
Nov 22 01:07:34 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:07:34 np0005531754 podman[286877]: 2025-11-22 06:07:34.761010924 +0000 UTC m=+0.417196680 container init 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 01:07:34 np0005531754 podman[286877]: 2025-11-22 06:07:34.774030224 +0000 UTC m=+0.430215930 container start 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 22 01:07:34 np0005531754 clever_keldysh[286893]: 167 167
Nov 22 01:07:34 np0005531754 systemd[1]: libpod-7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88.scope: Deactivated successfully.
Nov 22 01:07:34 np0005531754 podman[286877]: 2025-11-22 06:07:34.808616045 +0000 UTC m=+0.464801811 container attach 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:07:34 np0005531754 podman[286877]: 2025-11-22 06:07:34.810723891 +0000 UTC m=+0.466909597 container died 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 01:07:35 np0005531754 nova_compute[255660]: 2025-11-22 06:07:35.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:35 np0005531754 systemd[1]: var-lib-containers-storage-overlay-82a7c48b9d92346edc63f62565474e77c18a5b70b6213243e6f5b99f70681096-merged.mount: Deactivated successfully.
Nov 22 01:07:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:35 np0005531754 podman[286877]: 2025-11-22 06:07:35.462824602 +0000 UTC m=+1.119010308 container remove 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 01:07:35 np0005531754 systemd[1]: libpod-conmon-7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88.scope: Deactivated successfully.
Nov 22 01:07:35 np0005531754 podman[286920]: 2025-11-22 06:07:35.666868283 +0000 UTC m=+0.036192024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:07:35 np0005531754 podman[286920]: 2025-11-22 06:07:35.823674623 +0000 UTC m=+0.192998384 container create 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:07:35 np0005531754 systemd[1]: Started libpod-conmon-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope.
Nov 22 01:07:35 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:07:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:35 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:36 np0005531754 podman[286920]: 2025-11-22 06:07:36.062613185 +0000 UTC m=+0.431937036 container init 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 01:07:36 np0005531754 podman[286920]: 2025-11-22 06:07:36.07317741 +0000 UTC m=+0.442501161 container start 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:07:36 np0005531754 nova_compute[255660]: 2025-11-22 06:07:36.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:36 np0005531754 podman[286920]: 2025-11-22 06:07:36.16757561 +0000 UTC m=+0.536899371 container attach 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:07:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:07:36.950 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:07:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:07:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:07:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:07:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:07:37 np0005531754 relaxed_tharp[286936]: --> passed data devices: 0 physical, 3 LVM
Nov 22 01:07:37 np0005531754 relaxed_tharp[286936]: --> relative data size: 1.0
Nov 22 01:07:37 np0005531754 relaxed_tharp[286936]: --> All data devices are unavailable
Nov 22 01:07:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:37 np0005531754 systemd[1]: libpod-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope: Deactivated successfully.
Nov 22 01:07:37 np0005531754 systemd[1]: libpod-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope: Consumed 1.053s CPU time.
Nov 22 01:07:37 np0005531754 podman[286965]: 2025-11-22 06:07:37.236298073 +0000 UTC m=+0.046088712 container died 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:07:37 np0005531754 systemd[1]: var-lib-containers-storage-overlay-19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7-merged.mount: Deactivated successfully.
Nov 22 01:07:37 np0005531754 podman[286965]: 2025-11-22 06:07:37.30494578 +0000 UTC m=+0.114736459 container remove 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 01:07:37 np0005531754 systemd[1]: libpod-conmon-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope: Deactivated successfully.
Nov 22 01:07:38 np0005531754 podman[287121]: 2025-11-22 06:07:38.061343509 +0000 UTC m=+0.059475953 container create 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:07:38 np0005531754 systemd[1]: Started libpod-conmon-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope.
Nov 22 01:07:38 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:07:38 np0005531754 podman[287121]: 2025-11-22 06:07:38.041650639 +0000 UTC m=+0.039783083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:07:38 np0005531754 podman[287121]: 2025-11-22 06:07:38.138966337 +0000 UTC m=+0.137098781 container init 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:07:38 np0005531754 podman[287121]: 2025-11-22 06:07:38.148032332 +0000 UTC m=+0.146164766 container start 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 01:07:38 np0005531754 podman[287121]: 2025-11-22 06:07:38.151718451 +0000 UTC m=+0.149850885 container attach 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:07:38 np0005531754 systemd[1]: libpod-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope: Deactivated successfully.
Nov 22 01:07:38 np0005531754 determined_nash[287137]: 167 167
Nov 22 01:07:38 np0005531754 conmon[287137]: conmon 00a0fa0788957f67c489 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope/container/memory.events
Nov 22 01:07:38 np0005531754 podman[287121]: 2025-11-22 06:07:38.155324498 +0000 UTC m=+0.153456952 container died 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 01:07:38 np0005531754 systemd[1]: var-lib-containers-storage-overlay-a1649ea9195e6e408501d8e4a6e781008a15396d8c9f5330732b8ab2f8bb2cab-merged.mount: Deactivated successfully.
Nov 22 01:07:38 np0005531754 podman[287121]: 2025-11-22 06:07:38.202458436 +0000 UTC m=+0.200590870 container remove 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:07:38 np0005531754 systemd[1]: libpod-conmon-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope: Deactivated successfully.
Nov 22 01:07:38 np0005531754 podman[287160]: 2025-11-22 06:07:38.401305448 +0000 UTC m=+0.046777140 container create 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:07:38 np0005531754 systemd[1]: Started libpod-conmon-5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977.scope.
Nov 22 01:07:38 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:07:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:38 np0005531754 podman[287160]: 2025-11-22 06:07:38.381809054 +0000 UTC m=+0.027280796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:07:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:38 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:38 np0005531754 podman[287160]: 2025-11-22 06:07:38.502683587 +0000 UTC m=+0.148155299 container init 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:07:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:38 np0005531754 podman[287160]: 2025-11-22 06:07:38.510895178 +0000 UTC m=+0.156366870 container start 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 01:07:38 np0005531754 podman[287160]: 2025-11-22 06:07:38.54180776 +0000 UTC m=+0.187279452 container attach 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:07:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]: {
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:    "0": [
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:        {
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "devices": [
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "/dev/loop3"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            ],
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_name": "ceph_lv0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_size": "21470642176",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "name": "ceph_lv0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "tags": {
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cluster_name": "ceph",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.crush_device_class": "",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.encrypted": "0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osd_id": "0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.type": "block",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.vdo": "0"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            },
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "type": "block",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "vg_name": "ceph_vg0"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:        }
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:    ],
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:    "1": [
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:        {
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "devices": [
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "/dev/loop4"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            ],
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_name": "ceph_lv1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_size": "21470642176",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "name": "ceph_lv1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "tags": {
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cluster_name": "ceph",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.crush_device_class": "",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.encrypted": "0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osd_id": "1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.type": "block",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.vdo": "0"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            },
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "type": "block",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "vg_name": "ceph_vg1"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:        }
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:    ],
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:    "2": [
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:        {
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "devices": [
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "/dev/loop5"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            ],
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_name": "ceph_lv2",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_size": "21470642176",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "name": "ceph_lv2",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "tags": {
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.cluster_name": "ceph",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.crush_device_class": "",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.encrypted": "0",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osd_id": "2",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.type": "block",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:                "ceph.vdo": "0"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            },
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "type": "block",
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:            "vg_name": "ceph_vg2"
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:        }
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]:    ]
Nov 22 01:07:39 np0005531754 vigorous_nightingale[287176]: }
Nov 22 01:07:39 np0005531754 systemd[1]: libpod-5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977.scope: Deactivated successfully.
Nov 22 01:07:39 np0005531754 podman[287160]: 2025-11-22 06:07:39.306308326 +0000 UTC m=+0.951780048 container died 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:07:39 np0005531754 systemd[1]: var-lib-containers-storage-overlay-55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48-merged.mount: Deactivated successfully.
Nov 22 01:07:39 np0005531754 podman[287160]: 2025-11-22 06:07:39.36107774 +0000 UTC m=+1.006549432 container remove 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 01:07:39 np0005531754 systemd[1]: libpod-conmon-5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977.scope: Deactivated successfully.
Nov 22 01:07:40 np0005531754 podman[287338]: 2025-11-22 06:07:40.131044983 +0000 UTC m=+0.055245449 container create 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 01:07:40 np0005531754 nova_compute[255660]: 2025-11-22 06:07:40.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:07:40 np0005531754 nova_compute[255660]: 2025-11-22 06:07:40.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:07:40 np0005531754 nova_compute[255660]: 2025-11-22 06:07:40.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:07:40 np0005531754 nova_compute[255660]: 2025-11-22 06:07:40.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:07:40 np0005531754 systemd[1]: Started libpod-conmon-3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0.scope.
Nov 22 01:07:40 np0005531754 podman[287338]: 2025-11-22 06:07:40.10386824 +0000 UTC m=+0.028068797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:07:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:07:40 np0005531754 podman[287338]: 2025-11-22 06:07:40.222088913 +0000 UTC m=+0.146289429 container init 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:07:40 np0005531754 podman[287338]: 2025-11-22 06:07:40.233586532 +0000 UTC m=+0.157786998 container start 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:07:40 np0005531754 podman[287338]: 2025-11-22 06:07:40.237541619 +0000 UTC m=+0.161742125 container attach 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:07:40 np0005531754 laughing_hertz[287354]: 167 167
Nov 22 01:07:40 np0005531754 systemd[1]: libpod-3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0.scope: Deactivated successfully.
Nov 22 01:07:40 np0005531754 podman[287338]: 2025-11-22 06:07:40.239784829 +0000 UTC m=+0.163985335 container died 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:07:40 np0005531754 systemd[1]: var-lib-containers-storage-overlay-cbe1b99b8a7fb3a5a26391214e50d0b8a6e84bc0b6406897ebe2b261fce3195e-merged.mount: Deactivated successfully.
Nov 22 01:07:40 np0005531754 podman[287338]: 2025-11-22 06:07:40.285601212 +0000 UTC m=+0.209801718 container remove 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:07:40 np0005531754 systemd[1]: libpod-conmon-3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0.scope: Deactivated successfully.
Nov 22 01:07:40 np0005531754 podman[287376]: 2025-11-22 06:07:40.511391058 +0000 UTC m=+0.066640204 container create f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 01:07:40 np0005531754 systemd[1]: Started libpod-conmon-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope.
Nov 22 01:07:40 np0005531754 podman[287376]: 2025-11-22 06:07:40.483397656 +0000 UTC m=+0.038646842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:07:40 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:07:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:40 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:07:40 np0005531754 podman[287376]: 2025-11-22 06:07:40.606339974 +0000 UTC m=+0.161589130 container init f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 22 01:07:40 np0005531754 podman[287376]: 2025-11-22 06:07:40.619502999 +0000 UTC m=+0.174752125 container start f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:07:40 np0005531754 podman[287376]: 2025-11-22 06:07:40.623679511 +0000 UTC m=+0.178928657 container attach f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:07:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]: {
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "osd_id": 1,
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "type": "bluestore"
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:    },
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "osd_id": 2,
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "type": "bluestore"
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:    },
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "osd_id": 0,
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:        "type": "bluestore"
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]:    }
Nov 22 01:07:41 np0005531754 unruffled_knuth[287393]: }
Nov 22 01:07:41 np0005531754 systemd[1]: libpod-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope: Deactivated successfully.
Nov 22 01:07:41 np0005531754 systemd[1]: libpod-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope: Consumed 1.071s CPU time.
Nov 22 01:07:41 np0005531754 podman[287426]: 2025-11-22 06:07:41.743384136 +0000 UTC m=+0.036531044 container died f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 01:07:41 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b-merged.mount: Deactivated successfully.
Nov 22 01:07:41 np0005531754 podman[287426]: 2025-11-22 06:07:41.796336192 +0000 UTC m=+0.089483110 container remove f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 01:07:41 np0005531754 systemd[1]: libpod-conmon-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope: Deactivated successfully.
Nov 22 01:07:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:07:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:07:41 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:07:41 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:07:41 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev af4cdb90-ac76-4a5c-9dbc-823175df7b23 does not exist
Nov 22 01:07:41 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 69596ee6-99ff-485b-9de9-4126da40d97b does not exist
Nov 22 01:07:42 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:07:42 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:07:43
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'backups', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:07:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:07:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:07:44 np0005531754 podman[287492]: 2025-11-22 06:07:44.263700518 +0000 UTC m=+0.110469183 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 01:07:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:07:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/884860766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:07:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:07:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/884860766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:07:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:07:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:07:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:54 np0005531754 podman[287522]: 2025-11-22 06:07:54.238693415 +0000 UTC m=+0.085572404 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 01:07:54 np0005531754 podman[287523]: 2025-11-22 06:07:54.292887613 +0000 UTC m=+0.130613996 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 01:07:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:07:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:07:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:03 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:08:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:08:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:08:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:08:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:08:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:08:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:15 np0005531754 podman[287561]: 2025-11-22 06:08:15.317688327 +0000 UTC m=+0.169088662 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 22 01:08:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:18 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:23 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.182 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.183 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.183 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.183 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.184 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:08:25 np0005531754 podman[287589]: 2025-11-22 06:08:25.234016574 +0000 UTC m=+0.082771778 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 01:08:25 np0005531754 podman[287588]: 2025-11-22 06:08:25.253839988 +0000 UTC m=+0.102956262 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 01:08:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:08:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327703627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.688 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.895 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.897 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4986MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.897 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.898 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.991 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 01:08:25 np0005531754 nova_compute[255660]: 2025-11-22 06:08:25.992 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 01:08:26 np0005531754 nova_compute[255660]: 2025-11-22 06:08:26.020 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:08:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:08:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335912961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:08:26 np0005531754 nova_compute[255660]: 2025-11-22 06:08:26.495 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:08:26 np0005531754 nova_compute[255660]: 2025-11-22 06:08:26.502 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 01:08:26 np0005531754 nova_compute[255660]: 2025-11-22 06:08:26.518 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 01:08:26 np0005531754 nova_compute[255660]: 2025-11-22 06:08:26.520 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 01:08:26 np0005531754 nova_compute[255660]: 2025-11-22 06:08:26.520 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:08:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:28 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:30 np0005531754 nova_compute[255660]: 2025-11-22 06:08:30.522 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:33 np0005531754 nova_compute[255660]: 2025-11-22 06:08:33.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:33 np0005531754 nova_compute[255660]: 2025-11-22 06:08:33.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:33 np0005531754 nova_compute[255660]: 2025-11-22 06:08:33.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 01:08:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:33 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:35 np0005531754 nova_compute[255660]: 2025-11-22 06:08:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:35 np0005531754 nova_compute[255660]: 2025-11-22 06:08:35.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:08:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:08:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:08:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:08:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:08:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:08:37 np0005531754 nova_compute[255660]: 2025-11-22 06:08:37.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:37 np0005531754 nova_compute[255660]: 2025-11-22 06:08:37.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:38 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:40 np0005531754 nova_compute[255660]: 2025-11-22 06:08:40.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:08:40 np0005531754 nova_compute[255660]: 2025-11-22 06:08:40.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 01:08:40 np0005531754 nova_compute[255660]: 2025-11-22 06:08:40.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 01:08:40 np0005531754 nova_compute[255660]: 2025-11-22 06:08:40.146 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 01:08:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.271663) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721271694, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1003, "num_deletes": 251, "total_data_size": 1436497, "memory_usage": 1460384, "flush_reason": "Manual Compaction"}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721350874, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1422880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32321, "largest_seqno": 33323, "table_properties": {"data_size": 1417883, "index_size": 2521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10559, "raw_average_key_size": 19, "raw_value_size": 1408014, "raw_average_value_size": 2617, "num_data_blocks": 113, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791625, "oldest_key_time": 1763791625, "file_creation_time": 1763791721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 79311 microseconds, and 4058 cpu microseconds.
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.350970) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1422880 bytes OK
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.350996) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.379776) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.379804) EVENT_LOG_v1 {"time_micros": 1763791721379796, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.379827) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1431769, prev total WAL file size 1432926, number of live WAL files 2.
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.380768) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1389KB)], [68(8628KB)]
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721380817, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10258039, "oldest_snapshot_seqno": -1}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6253 keys, 8474710 bytes, temperature: kUnknown
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721673187, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8474710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8434004, "index_size": 23956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 157762, "raw_average_key_size": 25, "raw_value_size": 8323007, "raw_average_value_size": 1331, "num_data_blocks": 974, "num_entries": 6253, "num_filter_entries": 6253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.673717) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8474710 bytes
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.682019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.1 rd, 29.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.4 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(13.2) write-amplify(6.0) OK, records in: 6767, records dropped: 514 output_compression: NoCompression
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.682052) EVENT_LOG_v1 {"time_micros": 1763791721682037, "job": 38, "event": "compaction_finished", "compaction_time_micros": 292483, "compaction_time_cpu_micros": 30345, "output_level": 6, "num_output_files": 1, "total_output_size": 8474710, "num_input_records": 6767, "num_output_records": 6253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721682642, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721685764, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.380677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:08:41 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:08:42 np0005531754 podman[287844]: 2025-11-22 06:08:42.999184944 +0000 UTC m=+0.108882002 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 01:08:43 np0005531754 podman[287844]: 2025-11-22 06:08:43.12277351 +0000 UTC m=+0.232470568 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:08:43
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'backups']
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:08:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:08:43 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:08:43 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:08:44 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:08:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:08:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:45 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev c5283450-37bc-4f50-89ec-4e62043a10e1 does not exist
Nov 22 01:08:45 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 18798499-e11e-4907-ab75-5067c746f174 does not exist
Nov 22 01:08:45 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 9a422c73-4575-4c3d-8199-ac1697c5cc49 does not exist
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:08:45 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:08:45 np0005531754 podman[288185]: 2025-11-22 06:08:45.544611212 +0000 UTC m=+0.139601199 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 01:08:45 np0005531754 podman[288305]: 2025-11-22 06:08:45.959107768 +0000 UTC m=+0.055925787 container create 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:08:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:08:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:46 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:08:46 np0005531754 systemd[1]: Started libpod-conmon-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope.
Nov 22 01:08:46 np0005531754 podman[288305]: 2025-11-22 06:08:45.932714907 +0000 UTC m=+0.029532926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:08:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:08:46 np0005531754 podman[288305]: 2025-11-22 06:08:46.058571704 +0000 UTC m=+0.155389793 container init 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 01:08:46 np0005531754 podman[288305]: 2025-11-22 06:08:46.069032997 +0000 UTC m=+0.165851016 container start 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:08:46 np0005531754 podman[288305]: 2025-11-22 06:08:46.072947341 +0000 UTC m=+0.169765430 container attach 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:08:46 np0005531754 systemd[1]: libpod-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope: Deactivated successfully.
Nov 22 01:08:46 np0005531754 stoic_kare[288321]: 167 167
Nov 22 01:08:46 np0005531754 podman[288305]: 2025-11-22 06:08:46.079810506 +0000 UTC m=+0.176628545 container died 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:08:46 np0005531754 conmon[288321]: conmon 8c8f9db2a2da84e61741 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope/container/memory.events
Nov 22 01:08:46 np0005531754 systemd[1]: var-lib-containers-storage-overlay-0b01e8c561a2b5abc52c58b64c3ddcd1f321306d3c656c3d2b7d82559cb3f0d7-merged.mount: Deactivated successfully.
Nov 22 01:08:46 np0005531754 podman[288305]: 2025-11-22 06:08:46.137215621 +0000 UTC m=+0.234033640 container remove 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 01:08:46 np0005531754 systemd[1]: libpod-conmon-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope: Deactivated successfully.
Nov 22 01:08:46 np0005531754 podman[288344]: 2025-11-22 06:08:46.378035002 +0000 UTC m=+0.075255636 container create e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:08:46 np0005531754 podman[288344]: 2025-11-22 06:08:46.346114813 +0000 UTC m=+0.043335487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:08:46 np0005531754 systemd[1]: Started libpod-conmon-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope.
Nov 22 01:08:46 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:08:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:46 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:46 np0005531754 podman[288344]: 2025-11-22 06:08:46.508878134 +0000 UTC m=+0.206098758 container init e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:08:46 np0005531754 podman[288344]: 2025-11-22 06:08:46.520660681 +0000 UTC m=+0.217881285 container start e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:08:46 np0005531754 podman[288344]: 2025-11-22 06:08:46.525027518 +0000 UTC m=+0.222248212 container attach e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:08:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:08:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/288669014' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:08:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:08:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/288669014' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:08:47 np0005531754 vibrant_greider[288360]: --> passed data devices: 0 physical, 3 LVM
Nov 22 01:08:47 np0005531754 vibrant_greider[288360]: --> relative data size: 1.0
Nov 22 01:08:47 np0005531754 vibrant_greider[288360]: --> All data devices are unavailable
Nov 22 01:08:47 np0005531754 systemd[1]: libpod-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope: Deactivated successfully.
Nov 22 01:08:47 np0005531754 podman[288344]: 2025-11-22 06:08:47.623617255 +0000 UTC m=+1.320837889 container died e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:08:47 np0005531754 systemd[1]: libpod-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope: Consumed 1.054s CPU time.
Nov 22 01:08:47 np0005531754 systemd[1]: var-lib-containers-storage-overlay-b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e-merged.mount: Deactivated successfully.
Nov 22 01:08:47 np0005531754 podman[288344]: 2025-11-22 06:08:47.738015134 +0000 UTC m=+1.435235728 container remove e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:08:47 np0005531754 systemd[1]: libpod-conmon-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope: Deactivated successfully.
Nov 22 01:08:48 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:48 np0005531754 podman[288543]: 2025-11-22 06:08:48.562801452 +0000 UTC m=+0.032933618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:08:48 np0005531754 podman[288543]: 2025-11-22 06:08:48.665132016 +0000 UTC m=+0.135264122 container create f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 01:08:48 np0005531754 systemd[1]: Started libpod-conmon-f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c.scope.
Nov 22 01:08:48 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:08:48 np0005531754 podman[288543]: 2025-11-22 06:08:48.92874941 +0000 UTC m=+0.398881566 container init f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:08:48 np0005531754 podman[288543]: 2025-11-22 06:08:48.934412133 +0000 UTC m=+0.404544209 container start f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 01:08:48 np0005531754 trusting_hamilton[288559]: 167 167
Nov 22 01:08:48 np0005531754 systemd[1]: libpod-f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c.scope: Deactivated successfully.
Nov 22 01:08:49 np0005531754 podman[288543]: 2025-11-22 06:08:49.125837905 +0000 UTC m=+0.595970531 container attach f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:08:49 np0005531754 podman[288543]: 2025-11-22 06:08:49.126404 +0000 UTC m=+0.596536106 container died f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:08:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:49 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9bb915d66deb03c525bcca29de0b86233cf4345f414f273a1f98c5c8ed9f2431-merged.mount: Deactivated successfully.
Nov 22 01:08:50 np0005531754 podman[288543]: 2025-11-22 06:08:50.190073578 +0000 UTC m=+1.660205684 container remove f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 01:08:50 np0005531754 systemd[1]: libpod-conmon-f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c.scope: Deactivated successfully.
Nov 22 01:08:50 np0005531754 podman[288583]: 2025-11-22 06:08:50.390355388 +0000 UTC m=+0.043201784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:08:50 np0005531754 podman[288583]: 2025-11-22 06:08:50.745585609 +0000 UTC m=+0.398431955 container create abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 01:08:50 np0005531754 systemd[1]: Started libpod-conmon-abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f.scope.
Nov 22 01:08:50 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:08:50 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:50 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:50 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:50 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:51 np0005531754 podman[288583]: 2025-11-22 06:08:51.010883349 +0000 UTC m=+0.663729675 container init abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:08:51 np0005531754 podman[288583]: 2025-11-22 06:08:51.020411295 +0000 UTC m=+0.673257611 container start abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:08:51 np0005531754 podman[288583]: 2025-11-22 06:08:51.033877948 +0000 UTC m=+0.686724304 container attach abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:08:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]: {
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:    "0": [
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:        {
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "devices": [
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "/dev/loop3"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            ],
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_name": "ceph_lv0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_size": "21470642176",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "name": "ceph_lv0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "tags": {
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cluster_name": "ceph",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.crush_device_class": "",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.encrypted": "0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osd_id": "0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.type": "block",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.vdo": "0"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            },
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "type": "block",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "vg_name": "ceph_vg0"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:        }
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:    ],
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:    "1": [
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:        {
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "devices": [
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "/dev/loop4"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            ],
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_name": "ceph_lv1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_size": "21470642176",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "name": "ceph_lv1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "tags": {
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cluster_name": "ceph",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.crush_device_class": "",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.encrypted": "0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osd_id": "1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.type": "block",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.vdo": "0"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            },
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "type": "block",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "vg_name": "ceph_vg1"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:        }
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:    ],
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:    "2": [
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:        {
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "devices": [
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "/dev/loop5"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            ],
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_name": "ceph_lv2",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_size": "21470642176",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "name": "ceph_lv2",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "tags": {
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.cluster_name": "ceph",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.crush_device_class": "",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.encrypted": "0",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osd_id": "2",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.type": "block",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:                "ceph.vdo": "0"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            },
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "type": "block",
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:            "vg_name": "ceph_vg2"
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:        }
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]:    ]
Nov 22 01:08:51 np0005531754 vibrant_bohr[288600]: }
Nov 22 01:08:51 np0005531754 systemd[1]: libpod-abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f.scope: Deactivated successfully.
Nov 22 01:08:51 np0005531754 podman[288583]: 2025-11-22 06:08:51.775013095 +0000 UTC m=+1.427859451 container died abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 01:08:51 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34-merged.mount: Deactivated successfully.
Nov 22 01:08:51 np0005531754 podman[288583]: 2025-11-22 06:08:51.95876411 +0000 UTC m=+1.611610436 container remove abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 01:08:51 np0005531754 systemd[1]: libpod-conmon-abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f.scope: Deactivated successfully.
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:08:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:08:53 np0005531754 podman[288761]: 2025-11-22 06:08:53.204801836 +0000 UTC m=+0.042596297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:08:53 np0005531754 podman[288761]: 2025-11-22 06:08:53.336930321 +0000 UTC m=+0.174724742 container create d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 22 01:08:53 np0005531754 systemd[1]: Started libpod-conmon-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope.
Nov 22 01:08:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:08:53 np0005531754 podman[288761]: 2025-11-22 06:08:53.42864905 +0000 UTC m=+0.266443511 container init d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:08:53 np0005531754 podman[288761]: 2025-11-22 06:08:53.436022078 +0000 UTC m=+0.273816489 container start d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 01:08:53 np0005531754 podman[288761]: 2025-11-22 06:08:53.440120719 +0000 UTC m=+0.277915140 container attach d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 01:08:53 np0005531754 magical_hugle[288777]: 167 167
Nov 22 01:08:53 np0005531754 systemd[1]: libpod-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope: Deactivated successfully.
Nov 22 01:08:53 np0005531754 conmon[288777]: conmon d194c11c2c155aee8847 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope/container/memory.events
Nov 22 01:08:53 np0005531754 podman[288761]: 2025-11-22 06:08:53.446382388 +0000 UTC m=+0.284176799 container died d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 01:08:53 np0005531754 systemd[1]: var-lib-containers-storage-overlay-9460ec28d158d38c028abd02abcda9cfcb1df88568f6fc0f88ac313f5dcd4792-merged.mount: Deactivated successfully.
Nov 22 01:08:53 np0005531754 podman[288761]: 2025-11-22 06:08:53.5014592 +0000 UTC m=+0.339253611 container remove d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 01:08:53 np0005531754 systemd[1]: libpod-conmon-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope: Deactivated successfully.
Nov 22 01:08:53 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:53 np0005531754 podman[288800]: 2025-11-22 06:08:53.796272564 +0000 UTC m=+0.105820349 container create b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:08:53 np0005531754 podman[288800]: 2025-11-22 06:08:53.727732219 +0000 UTC m=+0.037280064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:08:53 np0005531754 systemd[1]: Started libpod-conmon-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope.
Nov 22 01:08:53 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:08:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:53 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:08:53 np0005531754 podman[288800]: 2025-11-22 06:08:53.98935688 +0000 UTC m=+0.298904725 container init b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 01:08:54 np0005531754 podman[288800]: 2025-11-22 06:08:54.002284939 +0000 UTC m=+0.311832694 container start b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 22 01:08:54 np0005531754 podman[288800]: 2025-11-22 06:08:54.006759559 +0000 UTC m=+0.316307344 container attach b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:08:55 np0005531754 kind_wilson[288816]: {
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "osd_id": 1,
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "type": "bluestore"
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:    },
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "osd_id": 2,
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "type": "bluestore"
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:    },
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "osd_id": 0,
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:        "type": "bluestore"
Nov 22 01:08:55 np0005531754 kind_wilson[288816]:    }
Nov 22 01:08:55 np0005531754 kind_wilson[288816]: }
Nov 22 01:08:55 np0005531754 systemd[1]: libpod-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope: Deactivated successfully.
Nov 22 01:08:55 np0005531754 podman[288800]: 2025-11-22 06:08:55.089363486 +0000 UTC m=+1.398911281 container died b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:08:55 np0005531754 systemd[1]: libpod-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope: Consumed 1.091s CPU time.
Nov 22 01:08:55 np0005531754 systemd[1]: var-lib-containers-storage-overlay-3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38-merged.mount: Deactivated successfully.
Nov 22 01:08:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:56 np0005531754 podman[288800]: 2025-11-22 06:08:56.124547707 +0000 UTC m=+2.434095502 container remove b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 01:08:56 np0005531754 systemd[1]: libpod-conmon-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope: Deactivated successfully.
Nov 22 01:08:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:08:56 np0005531754 podman[288863]: 2025-11-22 06:08:56.242268545 +0000 UTC m=+0.088165834 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 22 01:08:56 np0005531754 podman[288864]: 2025-11-22 06:08:56.271204304 +0000 UTC m=+0.116768453 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 01:08:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:56 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:08:56 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 85385dd6-182b-4f4f-9d17-5084148fcf52 does not exist
Nov 22 01:08:56 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 5dbc774d-9a51-4542-87b5-c3d5af2a8f3e does not exist
Nov 22 01:08:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:08:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:08:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:08:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:09:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:09:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:09:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:09:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:09:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:09:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:16 np0005531754 podman[288953]: 2025-11-22 06:09:16.245465072 +0000 UTC m=+0.099182251 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:09:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:19 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:19 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:21 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:23 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:24 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.155 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.155 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.156 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.156 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.156 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 01:09:25 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:25 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:09:25 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469962475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.661 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.837 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.839 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.839 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.839 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.903 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.904 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 01:09:25 np0005531754 nova_compute[255660]: 2025-11-22 06:09:25.926 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 01:09:26 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 01:09:26 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523048568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 01:09:26 np0005531754 nova_compute[255660]: 2025-11-22 06:09:26.354 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 01:09:26 np0005531754 nova_compute[255660]: 2025-11-22 06:09:26.360 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 01:09:26 np0005531754 nova_compute[255660]: 2025-11-22 06:09:26.375 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 01:09:26 np0005531754 nova_compute[255660]: 2025-11-22 06:09:26.378 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 01:09:26 np0005531754 nova_compute[255660]: 2025-11-22 06:09:26.378 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 01:09:27 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:27 np0005531754 podman[289026]: 2025-11-22 06:09:27.229156909 +0000 UTC m=+0.074304881 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 01:09:27 np0005531754 podman[289025]: 2025-11-22 06:09:27.247692927 +0000 UTC m=+0.097511046 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:09:29 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:29 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:31 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:31 np0005531754 nova_compute[255660]: 2025-11-22 06:09:31.380 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:33 np0005531754 nova_compute[255660]: 2025-11-22 06:09:33.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:33 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:34 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:34 np0005531754 nova_compute[255660]: 2025-11-22 06:09:34.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:34 np0005531754 nova_compute[255660]: 2025-11-22 06:09:34.148 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:34 np0005531754 nova_compute[255660]: 2025-11-22 06:09:34.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 01:09:35 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:36 np0005531754 nova_compute[255660]: 2025-11-22 06:09:36.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:36 np0005531754 nova_compute[255660]: 2025-11-22 06:09:36.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:09:36.952 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 01:09:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:09:36.953 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 01:09:36 np0005531754 ovn_metadata_agent[164613]: 2025-11-22 06:09:36.953 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 01:09:37 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:39 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:39 np0005531754 nova_compute[255660]: 2025-11-22 06:09:39.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:39 np0005531754 nova_compute[255660]: 2025-11-22 06:09:39.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:39 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:41 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:42 np0005531754 nova_compute[255660]: 2025-11-22 06:09:42.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 01:09:42 np0005531754 nova_compute[255660]: 2025-11-22 06:09:42.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 01:09:42 np0005531754 nova_compute[255660]: 2025-11-22 06:09:42.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 01:09:42 np0005531754 nova_compute[255660]: 2025-11-22 06:09:42.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:09:43
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups']
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:09:43 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:09:44 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:09:44 np0005531754 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 01:09:45 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:47 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 01:09:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2383423483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 01:09:47 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 01:09:47 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2383423483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 01:09:47 np0005531754 podman[289062]: 2025-11-22 06:09:47.320455267 +0000 UTC m=+0.169579931 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 22 01:09:49 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:49 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:51 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 01:09:53 np0005531754 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 01:09:54 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:55 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:56 np0005531754 systemd-logind[798]: New session 54 of user zuul.
Nov 22 01:09:56 np0005531754 systemd[1]: Started Session 54 of User zuul.
Nov 22 01:09:57 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:09:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:09:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 01:09:57 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:09:57 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 01:09:58 np0005531754 podman[289256]: 2025-11-22 06:09:58.018127362 +0000 UTC m=+0.085023471 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 01:09:58 np0005531754 podman[289257]: 2025-11-22 06:09:58.047343773 +0000 UTC m=+0.120156910 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 01:09:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:09:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev cddb69e4-b68c-439f-acbc-08a4f3edbe43 does not exist
Nov 22 01:09:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev f38dc9e0-6f6d-4bbf-9726-8f4a6c64751f does not exist
Nov 22 01:09:58 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev a9c86435-e90a-4966-aa81-3c371a0bc942 does not exist
Nov 22 01:09:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 01:09:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 01:09:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 01:09:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:09:58 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:09:58 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 01:09:59 np0005531754 podman[289538]: 2025-11-22 06:09:59.078990901 +0000 UTC m=+0.023461438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:09:59 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:09:59 np0005531754 podman[289538]: 2025-11-22 06:09:59.525713864 +0000 UTC m=+0.470184381 container create b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:09:59.744174) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791799744253, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 883, "num_deletes": 255, "total_data_size": 1184748, "memory_usage": 1202480, "flush_reason": "Manual Compaction"}
Nov 22 01:09:59 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 22 01:09:59 np0005531754 systemd[1]: Started libpod-conmon-b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999.scope.
Nov 22 01:09:59 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800026072, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1162748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33324, "largest_seqno": 34206, "table_properties": {"data_size": 1158315, "index_size": 2085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9621, "raw_average_key_size": 19, "raw_value_size": 1149390, "raw_average_value_size": 2289, "num_data_blocks": 93, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791721, "oldest_key_time": 1763791721, "file_creation_time": 1763791799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 281978 microseconds, and 7059 cpu microseconds.
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:10:00 np0005531754 podman[289538]: 2025-11-22 06:10:00.030330224 +0000 UTC m=+0.974800821 container init b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 01:10:00 np0005531754 podman[289538]: 2025-11-22 06:10:00.039906309 +0000 UTC m=+0.984376826 container start b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 01:10:00 np0005531754 gallant_sinoussi[289567]: 167 167
Nov 22 01:10:00 np0005531754 systemd[1]: libpod-b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999.scope: Deactivated successfully.
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.026154) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1162748 bytes OK
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.026189) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.365945) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.365981) EVENT_LOG_v1 {"time_micros": 1763791800365970, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.366005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1180400, prev total WAL file size 1181557, number of live WAL files 2.
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.366907) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303039' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1135KB)], [71(8276KB)]
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800366945, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9637458, "oldest_snapshot_seqno": -1}
Nov 22 01:10:00 np0005531754 podman[289538]: 2025-11-22 06:10:00.425397317 +0000 UTC m=+1.369867914 container attach b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:10:00 np0005531754 podman[289547]: 2025-11-22 06:10:00.42627636 +0000 UTC m=+1.352446298 container died b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6233 keys, 9375817 bytes, temperature: kUnknown
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800486867, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9375817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9334259, "index_size": 24872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 158273, "raw_average_key_size": 25, "raw_value_size": 9222541, "raw_average_value_size": 1479, "num_data_blocks": 1009, "num_entries": 6233, "num_filter_entries": 6233, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.487238) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9375817 bytes
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.494278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 78.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(16.4) write-amplify(8.1) OK, records in: 6755, records dropped: 522 output_compression: NoCompression
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.494300) EVENT_LOG_v1 {"time_micros": 1763791800494289, "job": 40, "event": "compaction_finished", "compaction_time_micros": 119997, "compaction_time_cpu_micros": 43836, "output_level": 6, "num_output_files": 1, "total_output_size": 9375817, "num_input_records": 6755, "num_output_records": 6233, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800494625, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800496121, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.366850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:10:00 np0005531754 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 01:10:00 np0005531754 systemd[1]: var-lib-containers-storage-overlay-47711594683d10edb41c568d354586c6124526cf77bfaf8062f1f82800152f23-merged.mount: Deactivated successfully.
Nov 22 01:10:00 np0005531754 podman[289538]: 2025-11-22 06:10:00.615030702 +0000 UTC m=+1.559501219 container remove b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 01:10:00 np0005531754 systemd[1]: libpod-conmon-b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999.scope: Deactivated successfully.
Nov 22 01:10:00 np0005531754 podman[289619]: 2025-11-22 06:10:00.78524868 +0000 UTC m=+0.042361952 container create 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 01:10:00 np0005531754 systemd[1]: Started libpod-conmon-0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573.scope.
Nov 22 01:10:00 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:10:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:00 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:00 np0005531754 podman[289619]: 2025-11-22 06:10:00.767141006 +0000 UTC m=+0.024254298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:10:00 np0005531754 podman[289619]: 2025-11-22 06:10:00.906118408 +0000 UTC m=+0.163231730 container init 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 01:10:00 np0005531754 podman[289619]: 2025-11-22 06:10:00.917652776 +0000 UTC m=+0.174766058 container start 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 01:10:00 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14815 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:00 np0005531754 podman[289619]: 2025-11-22 06:10:00.922557268 +0000 UTC m=+0.179670610 container attach 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 01:10:01 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:01 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14817 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:01 np0005531754 exciting_franklin[289636]: --> passed data devices: 0 physical, 3 LVM
Nov 22 01:10:01 np0005531754 exciting_franklin[289636]: --> relative data size: 1.0
Nov 22 01:10:01 np0005531754 exciting_franklin[289636]: --> All data devices are unavailable
Nov 22 01:10:01 np0005531754 systemd[1]: libpod-0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573.scope: Deactivated successfully.
Nov 22 01:10:01 np0005531754 podman[289619]: 2025-11-22 06:10:01.98280631 +0000 UTC m=+1.239919632 container died 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:10:02 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 01:10:02 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562757294' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 01:10:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b-merged.mount: Deactivated successfully.
Nov 22 01:10:02 np0005531754 podman[289619]: 2025-11-22 06:10:02.123189329 +0000 UTC m=+1.380302601 container remove 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:10:02 np0005531754 systemd[1]: libpod-conmon-0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573.scope: Deactivated successfully.
Nov 22 01:10:02 np0005531754 podman[289902]: 2025-11-22 06:10:02.776126322 +0000 UTC m=+0.047263374 container create ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 01:10:02 np0005531754 systemd[1]: Started libpod-conmon-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope.
Nov 22 01:10:02 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:10:02 np0005531754 podman[289902]: 2025-11-22 06:10:02.751571185 +0000 UTC m=+0.022708197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:10:02 np0005531754 podman[289902]: 2025-11-22 06:10:02.855314277 +0000 UTC m=+0.126451319 container init ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:10:02 np0005531754 podman[289902]: 2025-11-22 06:10:02.864561274 +0000 UTC m=+0.135698306 container start ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:10:02 np0005531754 elegant_leakey[289919]: 167 167
Nov 22 01:10:02 np0005531754 systemd[1]: libpod-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope: Deactivated successfully.
Nov 22 01:10:02 np0005531754 conmon[289919]: conmon ab7a0482abab772f2f32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope/container/memory.events
Nov 22 01:10:02 np0005531754 podman[289902]: 2025-11-22 06:10:02.871699725 +0000 UTC m=+0.142836787 container attach ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 01:10:02 np0005531754 podman[289902]: 2025-11-22 06:10:02.872135326 +0000 UTC m=+0.143272348 container died ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 01:10:02 np0005531754 systemd[1]: var-lib-containers-storage-overlay-e223320b65549e2ec0b10ff747c3dc9a9d6ba3002e12074442b93f135b91fdf3-merged.mount: Deactivated successfully.
Nov 22 01:10:02 np0005531754 podman[289902]: 2025-11-22 06:10:02.916738257 +0000 UTC m=+0.187875279 container remove ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 01:10:02 np0005531754 systemd[1]: libpod-conmon-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope: Deactivated successfully.
Nov 22 01:10:03 np0005531754 podman[289943]: 2025-11-22 06:10:03.128743981 +0000 UTC m=+0.072690703 container create 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 01:10:03 np0005531754 systemd[1]: Started libpod-conmon-1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca.scope.
Nov 22 01:10:03 np0005531754 podman[289943]: 2025-11-22 06:10:03.090828058 +0000 UTC m=+0.034774790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:10:03 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:10:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:03 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:03 np0005531754 podman[289943]: 2025-11-22 06:10:03.220629895 +0000 UTC m=+0.164576597 container init 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 01:10:03 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:03 np0005531754 podman[289943]: 2025-11-22 06:10:03.233353645 +0000 UTC m=+0.177300337 container start 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:10:03 np0005531754 podman[289943]: 2025-11-22 06:10:03.236385256 +0000 UTC m=+0.180331978 container attach 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]: {
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:    "0": [
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:        {
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "devices": [
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "/dev/loop3"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            ],
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_name": "ceph_lv0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_size": "21470642176",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "name": "ceph_lv0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "tags": {
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cluster_name": "ceph",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.crush_device_class": "",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.encrypted": "0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osd_id": "0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.type": "block",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.vdo": "0"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            },
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "type": "block",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "vg_name": "ceph_vg0"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:        }
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:    ],
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:    "1": [
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:        {
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "devices": [
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "/dev/loop4"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            ],
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_name": "ceph_lv1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_size": "21470642176",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "name": "ceph_lv1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "tags": {
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cluster_name": "ceph",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.crush_device_class": "",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.encrypted": "0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osd_id": "1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.type": "block",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.vdo": "0"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            },
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "type": "block",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "vg_name": "ceph_vg1"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:        }
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:    ],
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:    "2": [
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:        {
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "devices": [
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "/dev/loop5"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            ],
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_name": "ceph_lv2",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_size": "21470642176",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "name": "ceph_lv2",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "tags": {
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cephx_lockbox_secret": "",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.cluster_name": "ceph",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.crush_device_class": "",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.encrypted": "0",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osd_id": "2",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.type": "block",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:                "ceph.vdo": "0"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            },
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "type": "block",
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:            "vg_name": "ceph_vg2"
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:        }
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]:    ]
Nov 22 01:10:04 np0005531754 optimistic_gould[289960]: }
Nov 22 01:10:04 np0005531754 systemd[1]: libpod-1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca.scope: Deactivated successfully.
Nov 22 01:10:04 np0005531754 podman[289943]: 2025-11-22 06:10:04.069530812 +0000 UTC m=+1.013477534 container died 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 01:10:04 np0005531754 systemd[1]: var-lib-containers-storage-overlay-f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc-merged.mount: Deactivated successfully.
Nov 22 01:10:04 np0005531754 podman[289943]: 2025-11-22 06:10:04.351082193 +0000 UTC m=+1.295028895 container remove 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 22 01:10:04 np0005531754 systemd[1]: libpod-conmon-1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca.scope: Deactivated successfully.
Nov 22 01:10:04 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:10:04 np0005531754 ovs-vsctl[290138]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 01:10:04 np0005531754 podman[290155]: 2025-11-22 06:10:04.942420689 +0000 UTC m=+0.038424188 container create a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 01:10:04 np0005531754 systemd[1]: Started libpod-conmon-a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6.scope.
Nov 22 01:10:05 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:10:05 np0005531754 podman[290155]: 2025-11-22 06:10:04.926267617 +0000 UTC m=+0.022271136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:10:05 np0005531754 podman[290155]: 2025-11-22 06:10:05.030195754 +0000 UTC m=+0.126199303 container init a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 01:10:05 np0005531754 podman[290155]: 2025-11-22 06:10:05.038514267 +0000 UTC m=+0.134517756 container start a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 01:10:05 np0005531754 podman[290155]: 2025-11-22 06:10:05.042446261 +0000 UTC m=+0.138449750 container attach a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:10:05 np0005531754 blissful_shockley[290184]: 167 167
Nov 22 01:10:05 np0005531754 podman[290155]: 2025-11-22 06:10:05.044763713 +0000 UTC m=+0.140767192 container died a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 01:10:05 np0005531754 systemd[1]: libpod-a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6.scope: Deactivated successfully.
Nov 22 01:10:05 np0005531754 systemd[1]: var-lib-containers-storage-overlay-2c8810eeee41da6ab90c5dd9506ff7f8cd2c105670c56483070c20cbe75f6b29-merged.mount: Deactivated successfully.
Nov 22 01:10:05 np0005531754 podman[290155]: 2025-11-22 06:10:05.094486801 +0000 UTC m=+0.190490290 container remove a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 01:10:05 np0005531754 systemd[1]: libpod-conmon-a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6.scope: Deactivated successfully.
Nov 22 01:10:05 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:05 np0005531754 podman[290226]: 2025-11-22 06:10:05.312785822 +0000 UTC m=+0.068877770 container create 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 01:10:05 np0005531754 systemd[1]: Started libpod-conmon-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope.
Nov 22 01:10:05 np0005531754 podman[290226]: 2025-11-22 06:10:05.281103877 +0000 UTC m=+0.037195845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 01:10:05 np0005531754 systemd[1]: Started libcrun container.
Nov 22 01:10:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:05 np0005531754 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 01:10:05 np0005531754 podman[290226]: 2025-11-22 06:10:05.407906913 +0000 UTC m=+0.163998871 container init 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 01:10:05 np0005531754 podman[290226]: 2025-11-22 06:10:05.418119887 +0000 UTC m=+0.174211845 container start 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 01:10:05 np0005531754 podman[290226]: 2025-11-22 06:10:05.422410531 +0000 UTC m=+0.178502479 container attach 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 01:10:05 np0005531754 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 01:10:05 np0005531754 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 01:10:05 np0005531754 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]: {
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:    "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "osd_id": 1,
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "type": "bluestore"
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:    },
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:    "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "osd_id": 2,
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "type": "bluestore"
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:    },
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:    "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "osd_id": 0,
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:        "type": "bluestore"
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]:    }
Nov 22 01:10:06 np0005531754 confident_sanderson[290247]: }
Nov 22 01:10:06 np0005531754 systemd[1]: libpod-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope: Deactivated successfully.
Nov 22 01:10:06 np0005531754 conmon[290247]: conmon 344e5bf1639ae4c4374c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope/container/memory.events
Nov 22 01:10:06 np0005531754 podman[290226]: 2025-11-22 06:10:06.408263276 +0000 UTC m=+1.164355214 container died 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 01:10:06 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: cache status {prefix=cache status} (starting...)
Nov 22 01:10:06 np0005531754 systemd[1]: var-lib-containers-storage-overlay-ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e-merged.mount: Deactivated successfully.
Nov 22 01:10:06 np0005531754 podman[290226]: 2025-11-22 06:10:06.475547614 +0000 UTC m=+1.231639572 container remove 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 01:10:06 np0005531754 systemd[1]: libpod-conmon-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope: Deactivated successfully.
Nov 22 01:10:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 01:10:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:10:06 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 01:10:06 np0005531754 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:10:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 81db3c30-8a03-47f7-bea4-53062f107fd9 does not exist
Nov 22 01:10:06 np0005531754 ceph-mgr[76134]: [progress WARNING root] complete: ev 9c32bb57-be9f-4e78-a679-7f025cd80996 does not exist
Nov 22 01:10:06 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: client ls {prefix=client ls} (starting...)
Nov 22 01:10:06 np0005531754 lvm[290609]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 01:10:06 np0005531754 lvm[290609]: VG ceph_vg0 finished
Nov 22 01:10:06 np0005531754 lvm[290628]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 01:10:06 np0005531754 lvm[290628]: VG ceph_vg1 finished
Nov 22 01:10:06 np0005531754 lvm[290642]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 01:10:06 np0005531754 lvm[290642]: VG ceph_vg2 finished
Nov 22 01:10:06 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: damage ls {prefix=damage ls} (starting...)
Nov 22 01:10:07 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump loads {prefix=dump loads} (starting...)
Nov 22 01:10:07 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14823 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 22 01:10:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:10:07 np0005531754 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 01:10:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 22 01:10:07 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 22 01:10:07 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666805923' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 01:10:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 22 01:10:07 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 22 01:10:08 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14829 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:08 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:10:08 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:10:08.128+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:10:08 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069858798' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 01:10:08 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055500484' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 01:10:08 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: ops {prefix=ops} (starting...)
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019329833' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 01:10:08 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582277635' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 01:10:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 22 01:10:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417081325' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 01:10:09 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:09 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session ls {prefix=session ls} (starting...)
Nov 22 01:10:09 np0005531754 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: status {prefix=status} (starting...)
Nov 22 01:10:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 01:10:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87260242' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 01:10:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14843 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:10:09 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 01:10:09 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118891367' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 01:10:09 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14847 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545630278' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398278668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3631893294' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 22 01:10:10 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384306363' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 01:10:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 01:10:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156803746' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 01:10:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14859 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:11 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 01:10:11 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:10:11.091+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 01:10:11 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14863 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:11 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 22 01:10:11 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976020924' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 01:10:11 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14865 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 22 01:10:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2667366673' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 01:10:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14869 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 01:10:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931283969' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 01:10:12 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:12 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 01:10:12 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660957875' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 345.672515869s of 345.679687500s, submitted: 2
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 1769472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 1662976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 1613824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27ae5fc00
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 ms_handle_reset con 0x55c27d491c00 session 0x55c27c84cd20
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:12 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5749 writes, 24K keys, 5749 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5749 writes, 912 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdo
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.100036621s of 600.098510742s, submitted: 90
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 385024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 187.620025635s of 187.940231323s, submitted: 90
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981955 data_alloc: 218103808 data_used: 184320
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 24182784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 130 ms_handle_reset con 0x55c27dd64c00 session 0x55c27b9cda40
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 24158208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba9a000/0x0/0x4ffc00000, data 0x10bd8da/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,1])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba98000/0x0/0x4ffc00000, data 0x10bd90d/0x1185000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 131 ms_handle_reset con 0x55c27dd65000 session 0x55c27d99da40
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991578 data_alloc: 218103808 data_used: 188416
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.228631973s of 10.493903160s, submitted: 48
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993528 data_alloc: 218103808 data_used: 192512
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.048984528s of 17.207933426s, submitted: 15
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 23986176 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 23969792 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 10
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba8c000/0x0/0x4ffc00000, data 0x10c6f88/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 23945216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996008 data_alloc: 218103808 data_used: 192512
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 23879680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 23617536 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 23306240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000508 data_alloc: 218103808 data_used: 192512
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 23371776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 23240704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 11
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.229097366s of 10.394592285s, submitted: 43
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10e64b7/0x11b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 23126016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 23117824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 23044096 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001568 data_alloc: 218103808 data_used: 192512
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba61000/0x0/0x4ffc00000, data 0x10f0ecd/0x11bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 23019520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba57000/0x0/0x4ffc00000, data 0x10fbde4/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 21725184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 20561920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba4b000/0x0/0x4ffc00000, data 0x110823a/0x11d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008874 data_alloc: 218103808 data_used: 200704
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.060367584s of 10.409746170s, submitted: 65
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x1116701/0x11e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007998 data_alloc: 218103808 data_used: 200704
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 20340736 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x11254f0/0x11f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 20250624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 20373504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 20299776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013872 data_alloc: 218103808 data_used: 208896
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 20226048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x113fb10/0x120e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.961433411s of 10.210658073s, submitted: 54
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010722 data_alloc: 218103808 data_used: 208896
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 20078592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 20054016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9fb000/0x0/0x4ffc00000, data 0x1156181/0x1223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 20045824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 19922944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9f6000/0x0/0x4ffc00000, data 0x115af0e/0x1228000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011714 data_alloc: 218103808 data_used: 208896
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa854000/0x0/0x4ffc00000, data 0x115cd59/0x122a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.710002899s of 10.840860367s, submitted: 28
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x1166919/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010124 data_alloc: 218103808 data_used: 208896
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011676 data_alloc: 218103808 data_used: 208896
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 17539072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.881333351s of 10.000229836s, submitted: 24
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1180b27/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017194 data_alloc: 218103808 data_used: 217088
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 16302080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa825000/0x0/0x4ffc00000, data 0x118a04b/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 16261120 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 16220160 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020498 data_alloc: 218103808 data_used: 217088
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa813000/0x0/0x4ffc00000, data 0x119b383/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 16097280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.363933563s of 10.000885010s, submitted: 68
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022282 data_alloc: 218103808 data_used: 217088
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7f7000/0x0/0x4ffc00000, data 0x11b5e3b/0x1286000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 16187392 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 16138240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027624 data_alloc: 218103808 data_used: 225280
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7e0000/0x0/0x4ffc00000, data 0x11ccd00/0x129d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 15966208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 14663680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.663500786s of 10.003384590s, submitted: 58
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7b5000/0x0/0x4ffc00000, data 0x11f6ec0/0x12c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030366 data_alloc: 218103808 data_used: 225280
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 14467072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa796000/0x0/0x4ffc00000, data 0x1216d90/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa783000/0x0/0x4ffc00000, data 0x122a36a/0x12fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 13795328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 13811712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052002 data_alloc: 218103808 data_used: 233472
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 13123584 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa71f000/0x0/0x4ffc00000, data 0x12889a8/0x135d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa708000/0x0/0x4ffc00000, data 0x12a03d6/0x1375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.410496712s of 10.063361168s, submitted: 153
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 12115968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049980 data_alloc: 218103808 data_used: 233472
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12025856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 11812864 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 10698752 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa6c4000/0x0/0x4ffc00000, data 0x12e34ad/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 11739136 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062908 data_alloc: 218103808 data_used: 241664
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 12328960 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa698000/0x0/0x4ffc00000, data 0x130b8fd/0x13e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.388790131s of 10.036962509s, submitted: 152
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 12066816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 11968512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067180 data_alloc: 218103808 data_used: 249856
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 11952128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa65e000/0x0/0x4ffc00000, data 0x1348b92/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 10993664 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa647000/0x0/0x4ffc00000, data 0x1360b19/0x1437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069652 data_alloc: 218103808 data_used: 258048
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 10616832 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.325051308s of 10.040717125s, submitted: 60
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075246 data_alloc: 218103808 data_used: 266240
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa629000/0x0/0x4ffc00000, data 0x137b131/0x1454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 10461184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079110 data_alloc: 218103808 data_used: 266240
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 10428416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 10395648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ed000/0x0/0x4ffc00000, data 0x13b6889/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.324265480s of 11.515779495s, submitted: 35
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077896 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 10321920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ee000/0x0/0x4ffc00000, data 0x13b67ee/0x1490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086088 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9232384 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa59e000/0x0/0x4ffc00000, data 0x1404cc1/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9199616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 9625600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093304 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.734145164s of 11.004765511s, submitted: 52
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa56b000/0x0/0x4ffc00000, data 0x1438f3c/0x1513000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa532000/0x0/0x4ffc00000, data 0x147084c/0x154b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095534 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 9150464 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x14983d4/0x1572000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095530 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.397135735s of 11.280517578s, submitted: 59
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7593984 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 7479296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x14d321d/0x15ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098050 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 7462912 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099392 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.184447289s of 12.378032684s, submitted: 30
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098094 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098462 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 12
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.734287262s of 10.033769608s, submitted: 19
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4cd000/0x0/0x4ffc00000, data 0x14d592e/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099814 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101084 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100090 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.359554291s of 11.549299240s, submitted: 18
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 7864320 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d5816/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.601215363s of 11.686765671s, submitted: 14
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100378 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101794 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d58b5/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.926100731s of 11.090178490s, submitted: 16
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101618 data_alloc: 218103808 data_used: 274432
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88449024 unmapped: 7979008 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106954 data_alloc: 218103808 data_used: 282624
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d742f/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x14d7530/0x15b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.746568680s of 10.998859406s, submitted: 51
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108006 data_alloc: 218103808 data_used: 282624
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115008 data_alloc: 218103808 data_used: 290816
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14daac0/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc543/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119046 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848780632s of 10.849118233s, submitted: 70
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc608/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119676 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc541/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119372 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.558925629s of 10.797169685s, submitted: 23
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120626 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 7766016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 ms_handle_reset con 0x55c27dd65800 session 0x55c27d401c20
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14dc44b/0x15bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 7143424 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121304 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 13
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.979153633s of 11.168646812s, submitted: 206
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120872 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc5e2/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122944 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc5ad/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.424748421s of 10.796654701s, submitted: 27
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 7069696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123366 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc5e1/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 6905856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa083000/0x0/0x4ffc00000, data 0x1508cc2/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130868 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 5865472 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90677248 unmapped: 5750784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 5332992 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa02d000/0x0/0x4ffc00000, data 0x155d2e5/0x163f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.726997375s of 10.987854004s, submitted: 60
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142070 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 4874240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 4759552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9ff1000/0x0/0x4ffc00000, data 0x159c2c8/0x167d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143868 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 3416064 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 3145728 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.212817192s of 10.548931122s, submitted: 84
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151538 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 3604480 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92831744 unmapped: 3596288 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 3563520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158170 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 2383872 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2220032 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9eb9000/0x0/0x4ffc00000, data 0x16d2996/0x17b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 2670592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93806592 unmapped: 2621440 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x1705e04/0x17e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154906 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.826278687s of 11.162016869s, submitted: 70
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e79000/0x0/0x4ffc00000, data 0x1714710/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93995008 unmapped: 2433024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e56000/0x0/0x4ffc00000, data 0x1736fbe/0x1817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2236416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2228224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e42000/0x0/0x4ffc00000, data 0x174b9d7/0x182c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 1179648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161486 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e15000/0x0/0x4ffc00000, data 0x17766f9/0x1858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 1916928 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x17933f1/0x1874000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 1867776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170682 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 2760704 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.836094856s of 10.913021088s, submitted: 68
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d8f000/0x0/0x4ffc00000, data 0x17fcccc/0x18de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95117312 unmapped: 2359296 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1433600 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172456 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d5a000/0x0/0x4ffc00000, data 0x1832e2f/0x1913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180160 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 1990656 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 2457600 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.346602440s of 10.624962807s, submitted: 67
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96133120 unmapped: 2392064 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cf2000/0x0/0x4ffc00000, data 0x189b875/0x197c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8915 writes, 34K keys, 8915 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 8915 writes, 2241 syncs, 3.98 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3166 writes, 10K keys, 3166 commit groups, 1.0 writes per commit group, ingest: 14.20 MB, 0.02 MB/s#012Interval WAL: 3166 writes, 1329 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179516 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 2260992 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cc2000/0x0/0x4ffc00000, data 0x18ca961/0x19ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 2039808 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27c775400
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.670079231s of 13.884990692s, submitted: 22
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177262 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.552337646s of 21.707801819s, submitted: 8
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176268 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177860 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.837540627s of 11.850649834s, submitted: 3
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179404 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177186 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178762 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.027555466s of 13.073743820s, submitted: 11
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179664 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.837194443s of 10.898418427s, submitted: 7
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 827392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 901120 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184054 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc750/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185166 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 1933312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.652610779s of 11.716604233s, submitted: 16
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185554 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 1867776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186584 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b3/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185590 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.021935463s of 12.413156509s, submitted: 105
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186668 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185978 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185128 data_alloc: 218103808 data_used: 299008
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.800412178s of 13.922379494s, submitted: 10
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189286 data_alloc: 218103808 data_used: 307200
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 843776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189462 data_alloc: 218103808 data_used: 307200
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.972100258s of 12.103911400s, submitted: 26
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192404 data_alloc: 218103808 data_used: 315392
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 14
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfaff/0x19c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc11/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 761856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193468 data_alloc: 218103808 data_used: 315392
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193644 data_alloc: 218103808 data_used: 315392
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.880873680s of 12.922811508s, submitted: 19
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18dfcd0/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198760 data_alloc: 218103808 data_used: 315392
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc35/0x19c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197700 data_alloc: 218103808 data_used: 323584
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.758671761s of 10.983880043s, submitted: 61
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197876 data_alloc: 218103808 data_used: 323584
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201698 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.046489716s of 11.064700127s, submitted: 14
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202762 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202586 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.863492966s of 10.986426353s, submitted: 33
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 153 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18e4d7e/0x19ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206958 data_alloc: 218103808 data_used: 339968
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1810432 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 153 handle_osd_map epochs [154,155], i have 153, src has [1,155]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214634 data_alloc: 218103808 data_used: 352256
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e8559/0x19d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214978 data_alloc: 218103808 data_used: 352256
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.761722565s of 12.017519951s, submitted: 41
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 1761280 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 1744896 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 155 handle_osd_map epochs [156,157], i have 155, src has [1,157]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223380 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c95000/0x0/0x4ffc00000, data 0x18ebca8/0x19d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220608 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.814929008s of 10.932528496s, submitted: 43
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c98000/0x0/0x4ffc00000, data 0x18ebaf8/0x19d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224766 data_alloc: 218103808 data_used: 368640
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226534 data_alloc: 218103808 data_used: 368640
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.052343369s of 13.091160774s, submitted: 15
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225494 data_alloc: 218103808 data_used: 368640
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c94000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226380 data_alloc: 218103808 data_used: 368640
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228808 data_alloc: 218103808 data_used: 376832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.807965279s of 14.139179230s, submitted: 58
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229000 data_alloc: 218103808 data_used: 376832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 161 heartbeat osd_stat(store_statfs(0x4f9c8a000/0x0/0x4ffc00000, data 0x18f27c6/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236804 data_alloc: 218103808 data_used: 385024
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100155392 unmapped: 1515520 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239106 data_alloc: 218103808 data_used: 385024
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.630488396s of 11.893076897s, submitted: 66
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242096 data_alloc: 218103808 data_used: 385024
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 163 heartbeat osd_stat(store_statfs(0x4f9c85000/0x0/0x4ffc00000, data 0x18f5e0f/0x19e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14877 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245550 data_alloc: 218103808 data_used: 393216
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.243680000s of 11.435800552s, submitted: 63
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c81000/0x0/0x4ffc00000, data 0x18f7ac0/0x19ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249940 data_alloc: 218103808 data_used: 393216
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100229120 unmapped: 1441792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 166 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x18fb08e/0x19f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255022 data_alloc: 218103808 data_used: 393216
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 167 heartbeat osd_stat(store_statfs(0x4f9c79000/0x0/0x4ffc00000, data 0x18fcca4/0x19f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.215492249s of 12.445914268s, submitted: 69
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258316 data_alloc: 218103808 data_used: 401408
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258492 data_alloc: 218103808 data_used: 401408
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100286464 unmapped: 1384448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 169 ms_handle_reset con 0x55c27dd65000 session 0x55c27f3a21e0
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 1048576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 15
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261466 data_alloc: 218103808 data_used: 401408
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.873164177s of 11.996927261s, submitted: 252
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260762 data_alloc: 218103808 data_used: 401408
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9864000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.363555908s of 56.390811920s, submitted: 15
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 ms_handle_reset con 0x55c27dd65400 session 0x55c27c84cd20
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Got map version 16
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 638976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config show' '{prefix=config show}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1933312 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2179072 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 2564096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'perf dump' '{prefix=perf dump}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'perf schema' '{prefix=perf schema}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 13484032 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13336576 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13336576 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13336576 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100450304 unmapped: 13312000 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100450304 unmapped: 13312000 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100474880 unmapped: 13287424 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100474880 unmapped: 13287424 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100483072 unmapped: 13279232 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100556800 unmapped: 13205504 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2769 syncs, 3.81 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1633 writes, 3817 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 2.01 MB, 0.00 MB/s
Interval WAL: 1633 writes, 528 syncs, 3.09 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100597760 unmapped: 13164544 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 13099008 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 13099008 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 13099008 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 13033472 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 13033472 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 13033472 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 361.382568359s of 361.418334961s, submitted: 158
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 13017088 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100925440 unmapped: 12836864 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100925440 unmapped: 12836864 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100925440 unmapped: 12836864 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 12787712 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 12787712 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 12787712 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 12730368 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 12730368 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 12730368 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 12697600 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config show' '{prefix=config show}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 12222464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 12517376 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:13 np0005531754 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}'
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 01:10:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256331948' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14881 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:13 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 01:10:13 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725595137' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 01:10:13 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14885 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 01:10:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 01:10:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407678340' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 01:10:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14889 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 01:10:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 01:10:14 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 22 01:10:14 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365768524' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 01:10:14 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14893 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 01:10:15 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:15 np0005531754 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14899 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 01:10:15 np0005531754 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:10:15 np0005531754 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:10:15.488+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 01:10:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 22 01:10:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1946734465' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 01:10:15 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 22 01:10:15 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208711715' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660098652' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451135040' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310310766' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/286854755' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 22 01:10:16 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642374162' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239950706' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 01:10:17 np0005531754 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939297151' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579627788' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 22 01:10:17 np0005531754 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199070274' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1277952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 315.175384521s of 315.207031250s, submitted: 8
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1236992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1187840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1048576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: mgrc ms_handle_reset ms_handle_reset con 0x55e99eb0fc00
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: mgrc handle_mgr_configure stats_period=5
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 802816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 ms_handle_reset con 0x55e99f657800 session 0x55e99f863680
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 ms_handle_reset con 0x55e9a038a400 session 0x55e99ff2c000
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.3 total, 600.0 interval
Cumulative writes: 6951 writes, 28K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6951 writes, 1245 syncs, 5.58 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.096008301s of 600.027343750s, submitted: 90
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:17 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 186.598068237s of 186.921478271s, submitted: 90
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 127 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896151 data_alloc: 218103808 data_used: 237568
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fc5c0000/0x0/0x4ffc00000, data 0x59b775/0x65e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 10878976 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 130 ms_handle_reset con 0x55e99f657400 session 0x55e9a24a7e00
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 10887168 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 18210816 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 131 ms_handle_reset con 0x55e9a104e400 session 0x55e9a24ae3c0
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046209 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046209 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.570235252s of 11.802167892s, submitted: 32
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.628356934s of 18.760848999s, submitted: 13
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Got map version 10
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a09a5/0x1668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050071 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a09a5/0x1668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Got map version 11
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 17080320 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 17063936 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b5000/0x0/0x4ffc00000, data 0x15a0a6f/0x1669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.825894356s of 10.000619888s, submitted: 13
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054317 data_alloc: 218103808 data_used: 253952
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b3000/0x0/0x4ffc00000, data 0x15a0c9e/0x166a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059907 data_alloc: 218103808 data_used: 262144
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a294e/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17039360 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.553850174s of 10.000466347s, submitted: 42
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057663 data_alloc: 218103808 data_used: 262144
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 16998400 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a2a18/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a2a18/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062723 data_alloc: 218103808 data_used: 270336
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 16973824 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 16973824 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 16957440 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4574/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4574/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.791356087s of 10.000161171s, submitted: 30
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064529 data_alloc: 218103808 data_used: 270336
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a46d9/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 16908288 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063005 data_alloc: 218103808 data_used: 270336
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a47a3/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.954462051s of 10.000647545s, submitted: 8
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066317 data_alloc: 218103808 data_used: 270336
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 16834560 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a486d/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066141 data_alloc: 218103808 data_used: 270336
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a489c/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4937/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.945301056s of 10.004324913s, submitted: 10
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067733 data_alloc: 218103808 data_used: 270336
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a8000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072619 data_alloc: 218103808 data_used: 278528
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a9000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.752876282s of 10.019038200s, submitted: 31
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a9000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076617 data_alloc: 218103808 data_used: 286720
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a5000/0x0/0x4ffc00000, data 0x15a80b6/0x1678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076617 data_alloc: 218103808 data_used: 286720
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 17022976 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a6000/0x0/0x4ffc00000, data 0x15a811b/0x1678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429555893s of 10.579680443s, submitted: 16
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075239 data_alloc: 218103808 data_used: 286720
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 16982016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a7000/0x0/0x4ffc00000, data 0x15a814a/0x1677000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080909 data_alloc: 218103808 data_used: 294912
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 16982016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x15a9f79/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 16965632 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 16941056 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 16916480 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 16900096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.703340530s of 10.039009094s, submitted: 101
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086341 data_alloc: 218103808 data_used: 294912
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 16891904 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb5a0000/0x0/0x4ffc00000, data 0x15ad952/0x167e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb59f000/0x0/0x4ffc00000, data 0x15adaee/0x167f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089693 data_alloc: 218103808 data_used: 303104
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb59b000/0x0/0x4ffc00000, data 0x15af876/0x1682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 15777792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 15777792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.695872307s of 10.031254768s, submitted: 129
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095739 data_alloc: 218103808 data_used: 311296
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 14704640 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb596000/0x0/0x4ffc00000, data 0x15b32a5/0x1687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 14704640 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb597000/0x0/0x4ffc00000, data 0x15b336f/0x1687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094987 data_alloc: 218103808 data_used: 315392
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb593000/0x0/0x4ffc00000, data 0x15b4e5e/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb593000/0x0/0x4ffc00000, data 0x15b4e5e/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 13574144 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb594000/0x0/0x4ffc00000, data 0x15b4f28/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.292222023s of 10.259329796s, submitted: 47
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102263 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102263 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb591000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101383 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.688630104s of 10.714872360s, submitted: 17
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 13500416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 13500416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6b10/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6b10/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.969901085s of 11.010137558s, submitted: 5
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6bda/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104871 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b6ca2/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b6d07/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58f000/0x0/0x4ffc00000, data 0x15b6d05/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104743 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6ca4/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.209359169s of 12.344432831s, submitted: 13
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 13410304 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6d6e/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103167 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6dd3/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896079063s of 10.000349998s, submitted: 7
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102975 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6e9d/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6f02/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106511 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 13344768 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 13344768 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Got map version 12
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 13287424 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b70ae/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 13287424 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.912652969s of 10.000061989s, submitted: 11
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 13271040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107429 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 13271040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 13262848 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 13262848 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 13238272 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7398/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110633 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58d000/0x0/0x4ffc00000, data 0x15b748f/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.774273872s of 10.000308037s, submitted: 20
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111743 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58d000/0x0/0x4ffc00000, data 0x15b7593/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111743 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 13148160 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58b000/0x0/0x4ffc00000, data 0x15b76c0/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.892574310s of 10.000001907s, submitted: 8
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112629 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7788/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113751 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7727/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.934629440s of 10.000991821s, submitted: 10
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113751 data_alloc: 218103808 data_used: 331776
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7727/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114461 data_alloc: 218103808 data_used: 335872
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7881/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.757849693s of 10.007835388s, submitted: 14
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119553 data_alloc: 218103808 data_used: 344064
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 13025280 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb588000/0x0/0x4ffc00000, data 0x15b9635/0x1695000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 12607488 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb577000/0x0/0x4ffc00000, data 0x15ca73b/0x16a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [1])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 11862016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 9256960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 9355264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126243 data_alloc: 218103808 data_used: 344064
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 9289728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f9f70000/0x0/0x4ffc00000, data 0x162369f/0x16fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 9240576 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 9068544 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 9068544 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.811680794s of 10.004056931s, submitted: 91
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 8683520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141655 data_alloc: 218103808 data_used: 348160
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x169e8bc/0x177c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x169e8bc/0x177c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 8454144 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 6971392 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e9e000/0x0/0x4ffc00000, data 0x16f345c/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151375 data_alloc: 218103808 data_used: 356352
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 6946816 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 7208960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 7208960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e7a000/0x0/0x4ffc00000, data 0x1715edb/0x17f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 7110656 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.935678482s of 10.000359535s, submitted: 115
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 6750208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153195 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 6619136 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e2a000/0x0/0x4ffc00000, data 0x176630e/0x1844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 6619136 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 6635520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 6635520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 6488064 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158005 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 6242304 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9df1000/0x0/0x4ffc00000, data 0x179f4a5/0x187d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90120192 unmapped: 5128192 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9dd9000/0x0/0x4ffc00000, data 0x17b7751/0x1895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90259456 unmapped: 4988928 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9dae000/0x0/0x4ffc00000, data 0x17e1ddf/0x18c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 5349376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.467321396s of 10.002651215s, submitted: 62
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 5300224 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9da1000/0x0/0x4ffc00000, data 0x17f03b4/0x18cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156747 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 5300224 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90021888 unmapped: 5226496 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e9a24c1800 session 0x55e99f683a40
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172771 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92217344 unmapped: 3031040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9d20000/0x0/0x4ffc00000, data 0x186df18/0x194d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Got map version 13
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 2916352 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 2285568 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cee000/0x0/0x4ffc00000, data 0x18a05d4/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 2211840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.506773949s of 10.000545502s, submitted: 270
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cee000/0x0/0x4ffc00000, data 0x18a05d4/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1949696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cbc000/0x0/0x4ffc00000, data 0x18d34ae/0x19b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167737 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 2564096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 2564096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 2572288 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 2326528 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94035968 unmapped: 1212416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c66000/0x0/0x4ffc00000, data 0x192a11c/0x1a08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174241 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 1155072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 892928 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c1e000/0x0/0x4ffc00000, data 0x1971777/0x1a50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94109696 unmapped: 2187264 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94109696 unmapped: 2187264 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.250415802s of 10.249304771s, submitted: 52
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 1941504 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184509 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 1941504 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 1802240 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94756864 unmapped: 1540096 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf026/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95019008 unmapped: 1277952 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94363648 unmapped: 1933312 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf026/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181477 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf263/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186381 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.373756409s of 13.546041489s, submitted: 28
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3bd/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187283 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf379/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf379/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186769 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3b5/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.047278404s of 11.189574242s, submitted: 28
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3b5/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187191 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf47c/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191453 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf57b/0x1aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.809183121s of 10.000802040s, submitted: 15
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191277 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf51a/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190587 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf518/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.646739006s of 10.003696442s, submitted: 8
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf57d/0x1aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189925 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 9133 writes, 36K keys, 9133 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 9133 writes, 2084 syncs, 4.38 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2182 writes, 7209 keys, 2182 commit groups, 1.0 writes per commit group, ingest: 7.59 MB, 0.01 MB/s#012Interval WAL: 2182 writes, 839 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bce000/0x0/0x4ffc00000, data 0x19bf54a/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190955 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf677/0x1a9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.936868668s of 10.001495361s, submitted: 15
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf677/0x1a9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188083 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e99edc4800 session 0x55e99eaa2f00
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e9a038a000 session 0x55e99f863a40
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e99f657800 session 0x55e9a247c000
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf795/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188387 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf795/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.810064316s of 10.060415268s, submitted: 16
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188195 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf740/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188243 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.816052437s of 10.073899269s, submitted: 6
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187393 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.514488220s of 19.649364471s, submitted: 4
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188279 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf7a6/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188279 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148883820s of 10.212854385s, submitted: 7
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf904/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189373 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfb03/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191691 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.847572327s of 10.020855904s, submitted: 15
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bfb98/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191001 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 1974272 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfdcf/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193495 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfdcf/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bfd88/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765477180s of 10.907700539s, submitted: 20
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bfe8a/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194267 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfeff/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfeff/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193819 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfeba/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 1933312 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195779 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 1933312 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.628976822s of 11.735780716s, submitted: 18
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 1908736 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19c0084/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 2899968 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bc7000/0x0/0x4ffc00000, data 0x19ca4b5/0x1aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 2654208 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208253 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 2646016 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 2646016 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b8c000/0x0/0x4ffc00000, data 0x1a04374/0x1ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95690752 unmapped: 2703360 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b52000/0x0/0x4ffc00000, data 0x1a3dfc9/0x1b1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211041 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 2899968 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896175385s of 10.457120895s, submitted: 145
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 2793472 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 2727936 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b15000/0x0/0x4ffc00000, data 0x1a78e5c/0x1b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96591872 unmapped: 1802240 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 1441792 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211969 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 1409024 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 1409024 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac4000/0x0/0x4ffc00000, data 0x1acc2ef/0x1baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac4000/0x0/0x4ffc00000, data 0x1acc2ef/0x1baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213307 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 2809856 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9aa5000/0x0/0x4ffc00000, data 0x1aec27d/0x1bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 2572288 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.398673058s of 10.845589638s, submitted: 76
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 2564096 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a8b000/0x0/0x4ffc00000, data 0x1b03811/0x1be2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2269184 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2269184 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222355 data_alloc: 218103808 data_used: 360448
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 2195456 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a51000/0x0/0x4ffc00000, data 0x1b404d5/0x1c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97402880 unmapped: 2039808 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 2596864 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 2596864 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a51000/0x0/0x4ffc00000, data 0x1b409b2/0x1c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 2457600 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223335 data_alloc: 218103808 data_used: 368640
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a3e000/0x0/0x4ffc00000, data 0x1b51d6c/0x1c2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 2457600 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96993280 unmapped: 2449408 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a3e000/0x0/0x4ffc00000, data 0x1b51d6c/0x1c2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96993280 unmapped: 2449408 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.777762413s of 10.991305351s, submitted: 54
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 2064384 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 2064384 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227443 data_alloc: 218103808 data_used: 368640
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 1859584 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 1851392 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x1b8303c/0x1c60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x1b8303c/0x1c60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 1851392 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 2572288 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 2572288 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235585 data_alloc: 218103808 data_used: 380928
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98017280 unmapped: 2473984 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bb2c5a/0x1c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 2342912 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 2088960 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Got map version 14
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.176039696s of 10.044042587s, submitted: 58
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 1974272 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9982000/0x0/0x4ffc00000, data 0x1c085b8/0x1cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 2416640 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9983000/0x0/0x4ffc00000, data 0x1c08a7b/0x1ceb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241069 data_alloc: 218103808 data_used: 380928
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 1204224 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99196928 unmapped: 1294336 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 1138688 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9960000/0x0/0x4ffc00000, data 0x1c2c683/0x1d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 1138688 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 868352 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246425 data_alloc: 218103808 data_used: 380928
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 696320 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9934000/0x0/0x4ffc00000, data 0x1c58293/0x1d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 696320 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99221504 unmapped: 1269760 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.755167961s of 10.000792503s, submitted: 55
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 2129920 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 2121728 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f98eb000/0x0/0x4ffc00000, data 0x1ca0bfe/0x1d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258021 data_alloc: 218103808 data_used: 385024
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 2015232 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 2752512 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1466368 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100204544 unmapped: 1335296 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9882000/0x0/0x4ffc00000, data 0x1d094ff/0x1deb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 2154496 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255919 data_alloc: 218103808 data_used: 389120
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9880000/0x0/0x4ffc00000, data 0x1d0d156/0x1ded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2048000 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2048000 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 1957888 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.610513687s of 10.004332542s, submitted: 107
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 2727936 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 2678784 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263663 data_alloc: 218103808 data_used: 389120
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x1d594d1/0x1e3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9831000/0x0/0x4ffc00000, data 0x1d5cab0/0x1e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268991 data_alloc: 218103808 data_used: 397312
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 2220032 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100548608 unmapped: 2039808 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.842511177s of 10.004651070s, submitted: 45
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f97e8000/0x0/0x4ffc00000, data 0x1da2e8d/0x1e86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f97ca000/0x0/0x4ffc00000, data 0x1dc2d76/0x1ea4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271023 data_alloc: 218103808 data_used: 397312
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1818624 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 1753088 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 1753088 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 1589248 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9787000/0x0/0x4ffc00000, data 0x1e04161/0x1ee7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277019 data_alloc: 218103808 data_used: 397312
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 1531904 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 1531904 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 1523712 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.390405655s of 10.003252983s, submitted: 43
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9775000/0x0/0x4ffc00000, data 0x1e15f98/0x1ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 1638400 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 2572288 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289409 data_alloc: 218103808 data_used: 405504
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 2572288 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 2088960 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f970c000/0x0/0x4ffc00000, data 0x1e7e105/0x1f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 2080768 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1105920 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 153 handle_osd_map epochs [155,155], i have 153, src has [1,155]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 153 handle_osd_map epochs [154,155], i have 153, src has [1,155]
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 925696 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292805 data_alloc: 218103808 data_used: 413696
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 1744896 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 1736704 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 1736704 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.912647247s of 10.003127098s, submitted: 105
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f96be000/0x0/0x4ffc00000, data 0x1ec9226/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 2678784 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104136704 unmapped: 2646016 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300925 data_alloc: 218103808 data_used: 413696
Nov 22 01:10:18 np0005531754 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f925e000/0x0/0x4ffc00000, data 0x1f18ec5/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
